Dec 13 01:42:07 np0005558317 kernel: Linux version 5.14.0-648.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025
Dec 13 01:42:07 np0005558317 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 13 01:42:07 np0005558317 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 13 01:42:07 np0005558317 kernel: BIOS-provided physical RAM map:
Dec 13 01:42:07 np0005558317 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 01:42:07 np0005558317 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 01:42:07 np0005558317 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 01:42:07 np0005558317 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Dec 13 01:42:07 np0005558317 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Dec 13 01:42:07 np0005558317 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 01:42:07 np0005558317 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 01:42:07 np0005558317 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 01:42:07 np0005558317 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 01:42:07 np0005558317 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000027fffffff] usable
Dec 13 01:42:07 np0005558317 kernel: NX (Execute Disable) protection: active
Dec 13 01:42:07 np0005558317 kernel: APIC: Static calls initialized
Dec 13 01:42:07 np0005558317 kernel: SMBIOS 2.8 present.
Dec 13 01:42:07 np0005558317 kernel: DMI: Red Hat OpenStack Compute/RHEL, BIOS 1.16.1-1.el9 04/01/2014
Dec 13 01:42:07 np0005558317 kernel: Hypervisor detected: KVM
Dec 13 01:42:07 np0005558317 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:42:07 np0005558317 kernel: kvm-clock: using sched offset of 3363036111 cycles
Dec 13 01:42:07 np0005558317 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:42:07 np0005558317 kernel: tsc: Detected 2445.404 MHz processor
Dec 13 01:42:07 np0005558317 kernel: last_pfn = 0x280000 max_arch_pfn = 0x400000000
Dec 13 01:42:07 np0005558317 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 01:42:07 np0005558317 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 13 01:42:07 np0005558317 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Dec 13 01:42:07 np0005558317 kernel: found SMP MP-table at [mem 0x000f5b60-0x000f5b6f]
Dec 13 01:42:07 np0005558317 kernel: Using GB pages for direct mapping
Dec 13 01:42:07 np0005558317 kernel: RAMDISK: [mem 0x2d46a000-0x32a2cfff]
Dec 13 01:42:07 np0005558317 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:42:07 np0005558317 kernel: ACPI: RSDP 0x00000000000F5B20 000014 (v00 BOCHS )
Dec 13 01:42:07 np0005558317 kernel: ACPI: RSDT 0x000000007FFE35EB 000034 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 01:42:07 np0005558317 kernel: ACPI: FACP 0x000000007FFE3403 0000F4 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 01:42:07 np0005558317 kernel: ACPI: DSDT 0x000000007FFDFCC0 003743 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 01:42:07 np0005558317 kernel: ACPI: FACS 0x000000007FFDFC80 000040
Dec 13 01:42:07 np0005558317 kernel: ACPI: APIC 0x000000007FFE34F7 000090 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 01:42:07 np0005558317 kernel: ACPI: MCFG 0x000000007FFE3587 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 01:42:07 np0005558317 kernel: ACPI: WAET 0x000000007FFE35C3 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 01:42:07 np0005558317 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe3403-0x7ffe34f6]
Dec 13 01:42:07 np0005558317 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfcc0-0x7ffe3402]
Dec 13 01:42:07 np0005558317 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfc80-0x7ffdfcbf]
Dec 13 01:42:07 np0005558317 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe34f7-0x7ffe3586]
Dec 13 01:42:07 np0005558317 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe3587-0x7ffe35c2]
Dec 13 01:42:07 np0005558317 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe35c3-0x7ffe35ea]
Dec 13 01:42:07 np0005558317 kernel: No NUMA configuration found
Dec 13 01:42:07 np0005558317 kernel: Faking a node at [mem 0x0000000000000000-0x000000027fffffff]
Dec 13 01:42:07 np0005558317 kernel: NODE_DATA(0) allocated [mem 0x27ffd5000-0x27fffffff]
Dec 13 01:42:07 np0005558317 kernel: crashkernel reserved: 0x000000006f000000 - 0x000000007f000000 (256 MB)
Dec 13 01:42:07 np0005558317 kernel: Zone ranges:
Dec 13 01:42:07 np0005558317 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:42:07 np0005558317 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 01:42:07 np0005558317 kernel:  Normal   [mem 0x0000000100000000-0x000000027fffffff]
Dec 13 01:42:07 np0005558317 kernel:  Device   empty
Dec 13 01:42:07 np0005558317 kernel: Movable zone start for each node
Dec 13 01:42:07 np0005558317 kernel: Early memory node ranges
Dec 13 01:42:07 np0005558317 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 01:42:07 np0005558317 kernel:  node   0: [mem 0x0000000000100000-0x000000007ffdafff]
Dec 13 01:42:07 np0005558317 kernel:  node   0: [mem 0x0000000100000000-0x000000027fffffff]
Dec 13 01:42:07 np0005558317 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000027fffffff]
Dec 13 01:42:07 np0005558317 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:42:07 np0005558317 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 01:42:07 np0005558317 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 13 01:42:07 np0005558317 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 01:42:07 np0005558317 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:42:07 np0005558317 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:42:07 np0005558317 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 01:42:07 np0005558317 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:42:07 np0005558317 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:42:07 np0005558317 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:42:07 np0005558317 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:42:07 np0005558317 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:42:07 np0005558317 kernel: TSC deadline timer available
Dec 13 01:42:07 np0005558317 kernel: CPU topo: Max. logical packages:   4
Dec 13 01:42:07 np0005558317 kernel: CPU topo: Max. logical dies:       4
Dec 13 01:42:07 np0005558317 kernel: CPU topo: Max. dies per package:   1
Dec 13 01:42:07 np0005558317 kernel: CPU topo: Max. threads per core:   1
Dec 13 01:42:07 np0005558317 kernel: CPU topo: Num. cores per package:     1
Dec 13 01:42:07 np0005558317 kernel: CPU topo: Num. threads per package:   1
Dec 13 01:42:07 np0005558317 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Dec 13 01:42:07 np0005558317 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 01:42:07 np0005558317 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 01:42:07 np0005558317 kernel: kvm-guest: setup PV sched yield
Dec 13 01:42:07 np0005558317 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 13 01:42:07 np0005558317 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 13 01:42:07 np0005558317 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 13 01:42:07 np0005558317 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 13 01:42:07 np0005558317 kernel: PM: hibernation: Registered nosave memory: [mem 0x7ffdb000-0x7fffffff]
Dec 13 01:42:07 np0005558317 kernel: PM: hibernation: Registered nosave memory: [mem 0x80000000-0xafffffff]
Dec 13 01:42:07 np0005558317 kernel: PM: hibernation: Registered nosave memory: [mem 0xb0000000-0xbfffffff]
Dec 13 01:42:07 np0005558317 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfed1bfff]
Dec 13 01:42:07 np0005558317 kernel: PM: hibernation: Registered nosave memory: [mem 0xfed1c000-0xfed1ffff]
Dec 13 01:42:07 np0005558317 kernel: PM: hibernation: Registered nosave memory: [mem 0xfed20000-0xfeffbfff]
Dec 13 01:42:07 np0005558317 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 13 01:42:07 np0005558317 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 13 01:42:07 np0005558317 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 13 01:42:07 np0005558317 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 01:42:07 np0005558317 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:42:07 np0005558317 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:42:07 np0005558317 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 13 01:42:07 np0005558317 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u524288
Dec 13 01:42:07 np0005558317 kernel: kvm-guest: PV spinlocks enabled
Dec 13 01:42:07 np0005558317 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 01:42:07 np0005558317 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 13 01:42:07 np0005558317 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64", will be passed to user space.
Dec 13 01:42:07 np0005558317 kernel: random: crng init done
Dec 13 01:42:07 np0005558317 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 13 01:42:07 np0005558317 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:42:07 np0005558317 kernel: Fallback order for Node 0: 0 
Dec 13 01:42:07 np0005558317 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec 13 01:42:07 np0005558317 kernel: Policy zone: Normal
Dec 13 01:42:07 np0005558317 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:42:07 np0005558317 kernel: software IO TLB: area num 4.
Dec 13 01:42:07 np0005558317 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 01:42:07 np0005558317 kernel: ftrace: allocating 49357 entries in 193 pages
Dec 13 01:42:07 np0005558317 kernel: ftrace: allocated 193 pages with 3 groups
Dec 13 01:42:07 np0005558317 kernel: Dynamic Preempt: voluntary
Dec 13 01:42:07 np0005558317 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:42:07 np0005558317 kernel: rcu: 	RCU event tracing is enabled.
Dec 13 01:42:07 np0005558317 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=4.
Dec 13 01:42:07 np0005558317 kernel: 	Trampoline variant of Tasks RCU enabled.
Dec 13 01:42:07 np0005558317 kernel: 	Rude variant of Tasks RCU enabled.
Dec 13 01:42:07 np0005558317 kernel: 	Tracing variant of Tasks RCU enabled.
Dec 13 01:42:07 np0005558317 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:42:07 np0005558317 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 01:42:07 np0005558317 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:42:07 np0005558317 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:42:07 np0005558317 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:42:07 np0005558317 kernel: NR_IRQS: 524544, nr_irqs: 456, preallocated irqs: 16
Dec 13 01:42:07 np0005558317 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:42:07 np0005558317 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 13 01:42:07 np0005558317 kernel: Console: colour VGA+ 80x25
Dec 13 01:42:07 np0005558317 kernel: printk: console [ttyS0] enabled
Dec 13 01:42:07 np0005558317 kernel: ACPI: Core revision 20230331
Dec 13 01:42:07 np0005558317 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:42:07 np0005558317 kernel: x2apic enabled
Dec 13 01:42:07 np0005558317 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:42:07 np0005558317 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 13 01:42:07 np0005558317 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 13 01:42:07 np0005558317 kernel: kvm-guest: setup PV IPIs
Dec 13 01:42:07 np0005558317 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 01:42:07 np0005558317 kernel: Calibrating delay loop (skipped) preset value.. 4890.80 BogoMIPS (lpj=2445404)
Dec 13 01:42:07 np0005558317 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 01:42:07 np0005558317 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 01:42:07 np0005558317 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 01:42:07 np0005558317 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:42:07 np0005558317 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:42:07 np0005558317 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 13 01:42:07 np0005558317 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 13 01:42:07 np0005558317 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:42:07 np0005558317 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 01:42:07 np0005558317 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 01:42:07 np0005558317 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 01:42:07 np0005558317 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 01:42:07 np0005558317 kernel: Transient Scheduler Attacks: Vulnerable: No microcode
Dec 13 01:42:07 np0005558317 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:42:07 np0005558317 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:42:07 np0005558317 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:42:07 np0005558317 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 13 01:42:07 np0005558317 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 13 01:42:07 np0005558317 kernel: x86/fpu: xstate_offset[9]:  832, xstate_sizes[9]:    8
Dec 13 01:42:07 np0005558317 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Dec 13 01:42:07 np0005558317 kernel: Freeing SMP alternatives memory: 40K
Dec 13 01:42:07 np0005558317 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:42:07 np0005558317 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec 13 01:42:07 np0005558317 kernel: landlock: Up and running.
Dec 13 01:42:07 np0005558317 kernel: Yama: becoming mindful.
Dec 13 01:42:07 np0005558317 kernel: SELinux:  Initializing.
Dec 13 01:42:07 np0005558317 kernel: LSM support for eBPF active
Dec 13 01:42:07 np0005558317 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:42:07 np0005558317 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:42:07 np0005558317 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Dec 13 01:42:07 np0005558317 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 01:42:07 np0005558317 kernel: ... version:                0
Dec 13 01:42:07 np0005558317 kernel: ... bit width:              48
Dec 13 01:42:07 np0005558317 kernel: ... generic registers:      6
Dec 13 01:42:07 np0005558317 kernel: ... value mask:             0000ffffffffffff
Dec 13 01:42:07 np0005558317 kernel: ... max period:             00007fffffffffff
Dec 13 01:42:07 np0005558317 kernel: ... fixed-purpose events:   0
Dec 13 01:42:07 np0005558317 kernel: ... event mask:             000000000000003f
Dec 13 01:42:07 np0005558317 kernel: signal: max sigframe size: 3376
Dec 13 01:42:07 np0005558317 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:42:07 np0005558317 kernel: rcu: 	Max phase no-delay instances is 400.
Dec 13 01:42:07 np0005558317 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:42:07 np0005558317 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:42:07 np0005558317 kernel: .... node  #0, CPUs:      #1 #2 #3
Dec 13 01:42:07 np0005558317 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 01:42:07 np0005558317 kernel: smpboot: Total of 4 processors activated (19563.23 BogoMIPS)
Dec 13 01:42:07 np0005558317 kernel: node 0 deferred pages initialised in 9ms
Dec 13 01:42:07 np0005558317 kernel: Memory: 7766284K/8388068K available (16384K kernel code, 5795K rwdata, 13916K rodata, 4192K init, 7164K bss, 617172K reserved, 0K cma-reserved)
Dec 13 01:42:07 np0005558317 kernel: devtmpfs: initialized
Dec 13 01:42:07 np0005558317 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:42:07 np0005558317 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:42:07 np0005558317 kernel: futex hash table entries: 1024 (65536 bytes on 1 NUMA nodes, total 64 KiB, linear).
Dec 13 01:42:07 np0005558317 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:42:07 np0005558317 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:42:07 np0005558317 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec 13 01:42:07 np0005558317 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 01:42:07 np0005558317 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 01:42:07 np0005558317 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:42:07 np0005558317 kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 13 01:42:07 np0005558317 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:42:07 np0005558317 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:42:07 np0005558317 kernel: audit: type=2000 audit(1765608125.976:1): state=initialized audit_enabled=0 res=1
Dec 13 01:42:07 np0005558317 kernel: cpuidle: using governor menu
Dec 13 01:42:07 np0005558317 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:42:07 np0005558317 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Dec 13 01:42:07 np0005558317 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 13 01:42:07 np0005558317 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:42:07 np0005558317 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:42:07 np0005558317 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:42:07 np0005558317 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:42:07 np0005558317 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:42:07 np0005558317 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:42:07 np0005558317 kernel: Demotion targets for Node 0: null
Dec 13 01:42:07 np0005558317 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:42:07 np0005558317 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:42:07 np0005558317 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:42:07 np0005558317 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:42:07 np0005558317 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:42:07 np0005558317 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:42:07 np0005558317 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:42:07 np0005558317 kernel: ACPI: Interpreter enabled
Dec 13 01:42:07 np0005558317 kernel: ACPI: PM: (supports S0 S5)
Dec 13 01:42:07 np0005558317 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:42:07 np0005558317 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:42:07 np0005558317 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 01:42:07 np0005558317 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 01:42:07 np0005558317 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:42:07 np0005558317 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 13 01:42:07 np0005558317 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR DPC]
Dec 13 01:42:07 np0005558317 kernel: acpi PNP0A08:00: _OSC: OS now controls [SHPCHotplug PME AER PCIeCapability]
Dec 13 01:42:07 np0005558317 kernel: PCI host bridge to bus 0000:00
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:00: root bus resource [mem 0x280000000-0xa7fffffff window]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:01.0: BAR 0 [mem 0xf9800000-0xf9ffffff pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfc200000-0xfc203fff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:01.0: BAR 4 [mem 0xfea10000-0xfea10fff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:01.0: ROM [mem 0xfea00000-0xfea0ffff pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea11000-0xfea11fff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea12000-0xfea12fff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea13000-0xfea13fff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea14000-0xfea14fff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea15000-0xfea15fff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea16000-0xfea16fff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea17000-0xfea17fff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea18000-0xfea18fff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.0: BAR 0 [mem 0xfea19000-0xfea19fff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.1: BAR 0 [mem 0xfea1a000-0xfea1afff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.2: BAR 0 [mem 0xfea1b000-0xfea1bfff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.3: BAR 0 [mem 0xfea1c000-0xfea1cfff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.4: BAR 0 [mem 0xfea1d000-0xfea1dfff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.5: BAR 0 [mem 0xfea1e000-0xfea1efff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.6: BAR 0 [mem 0xfea1f000-0xfea1ffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.7: BAR 0 [mem 0xfea20000-0xfea20fff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:04.0: BAR 0 [mem 0xfea21000-0xfea21fff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:1f.0: quirk: [io  0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:1f.2: BAR 4 [io  0xd040-0xd05f]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea22000-0xfea22fff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:1f.3: BAR 4 [io  0x0700-0x073f]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Dec 13 01:42:07 np0005558317 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfc800000-0xfc8000ff 64bit]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:02: extended config space not accessible
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [0] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [1] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [2] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [3] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [4] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [5] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [6] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [7] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [8] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [9] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [10] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [11] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [12] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [13] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [14] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [15] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [16] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [17] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [18] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [19] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [20] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [21] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [22] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [23] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [24] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [25] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [26] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [27] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [28] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [29] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [30] registered
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [31] registered
Dec 13 01:42:07 np0005558317 kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec 13 01:42:07 np0005558317 kernel: pci 0000:02:01.0: BAR 4 [io  0xc000-0xc01f]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [0-2] registered
Dec 13 01:42:07 np0005558317 kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Dec 13 01:42:07 np0005558317 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfe840000-0xfe840fff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:03:00.0: BAR 4 [mem 0xfbe00000-0xfbe03fff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:03:00.0: ROM [mem 0xfe800000-0xfe83ffff pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [0-3] registered
Dec 13 01:42:07 np0005558317 kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint
Dec 13 01:42:07 np0005558317 kernel: pci 0000:04:00.0: BAR 1 [mem 0xfe600000-0xfe600fff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfbc00000-0xfbc03fff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [0-4] registered
Dec 13 01:42:07 np0005558317 kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Dec 13 01:42:07 np0005558317 kernel: pci 0000:05:00.0: BAR 4 [mem 0xfba00000-0xfba03fff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [0-5] registered
Dec 13 01:42:07 np0005558317 kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Dec 13 01:42:07 np0005558317 kernel: pci 0000:06:00.0: BAR 4 [mem 0xfb800000-0xfb803fff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [0-6] registered
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [0-7] registered
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [0-8] registered
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [0-9] registered
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [0-10] registered
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [0-11] registered
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [0-12] registered
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [0-13] registered
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [0-14] registered
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [0-15] registered
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [0-16] registered
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Dec 13 01:42:07 np0005558317 kernel: acpiphp: Slot [0-17] registered
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Dec 13 01:42:07 np0005558317 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:42:07 np0005558317 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:42:07 np0005558317 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:42:07 np0005558317 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:42:07 np0005558317 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 01:42:07 np0005558317 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 01:42:07 np0005558317 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 01:42:07 np0005558317 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 01:42:07 np0005558317 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 01:42:07 np0005558317 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 01:42:07 np0005558317 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 01:42:07 np0005558317 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 01:42:07 np0005558317 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 01:42:07 np0005558317 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 01:42:07 np0005558317 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 01:42:07 np0005558317 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 01:42:07 np0005558317 kernel: iommu: Default domain type: Translated
Dec 13 01:42:07 np0005558317 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:42:07 np0005558317 kernel: SCSI subsystem initialized
Dec 13 01:42:07 np0005558317 kernel: ACPI: bus type USB registered
Dec 13 01:42:07 np0005558317 kernel: usbcore: registered new interface driver usbfs
Dec 13 01:42:07 np0005558317 kernel: usbcore: registered new interface driver hub
Dec 13 01:42:07 np0005558317 kernel: usbcore: registered new device driver usb
Dec 13 01:42:07 np0005558317 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 01:42:07 np0005558317 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 13 01:42:07 np0005558317 kernel: PTP clock support registered
Dec 13 01:42:07 np0005558317 kernel: EDAC MC: Ver: 3.0.0
Dec 13 01:42:07 np0005558317 kernel: NetLabel: Initializing
Dec 13 01:42:07 np0005558317 kernel: NetLabel:  domain hash size = 128
Dec 13 01:42:07 np0005558317 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec 13 01:42:07 np0005558317 kernel: NetLabel:  unlabeled traffic allowed by default
Dec 13 01:42:07 np0005558317 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:42:07 np0005558317 kernel: vgaarb: loaded
Dec 13 01:42:07 np0005558317 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:42:07 np0005558317 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:42:07 np0005558317 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:42:07 np0005558317 kernel: pnp: PnP ACPI init
Dec 13 01:42:07 np0005558317 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 01:42:07 np0005558317 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 01:42:07 np0005558317 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:42:07 np0005558317 kernel: NET: Registered PF_INET protocol family
Dec 13 01:42:07 np0005558317 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 01:42:07 np0005558317 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 13 01:42:07 np0005558317 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:42:07 np0005558317 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:42:07 np0005558317 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 13 01:42:07 np0005558317 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 13 01:42:07 np0005558317 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec 13 01:42:07 np0005558317 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 01:42:07 np0005558317 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 01:42:07 np0005558317 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:42:07 np0005558317 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.2: bridge window [io  0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.3: bridge window [io  0x1000-0x0fff] to [bus 05] add_size 1000
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.4: bridge window [io  0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.5: bridge window [io  0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.6: bridge window [io  0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.7: bridge window [io  0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.0: bridge window [io  0x1000-0x0fff] to [bus 0a] add_size 1000
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.1: bridge window [io  0x1000-0x0fff] to [bus 0b] add_size 1000
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.2: bridge window [io  0x1000-0x0fff] to [bus 0c] add_size 1000
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.3: bridge window [io  0x1000-0x0fff] to [bus 0d] add_size 1000
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.4: bridge window [io  0x1000-0x0fff] to [bus 0e] add_size 1000
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.5: bridge window [io  0x1000-0x0fff] to [bus 0f] add_size 1000
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.6: bridge window [io  0x1000-0x0fff] to [bus 10] add_size 1000
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.7: bridge window [io  0x1000-0x0fff] to [bus 11] add_size 1000
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x0fff] to [bus 12] add_size 1000
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x1fff]: assigned
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.2: bridge window [io  0x2000-0x2fff]: assigned
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.3: bridge window [io  0x3000-0x3fff]: assigned
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.4: bridge window [io  0x4000-0x4fff]: assigned
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.5: bridge window [io  0x5000-0x5fff]: assigned
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.6: bridge window [io  0x6000-0x6fff]: assigned
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.7: bridge window [io  0x7000-0x7fff]: assigned
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.0: bridge window [io  0x8000-0x8fff]: assigned
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.1: bridge window [io  0x9000-0x9fff]: assigned
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.2: bridge window [io  0xa000-0xafff]: assigned
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.3: bridge window [io  0xb000-0xbfff]: assigned
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.4: bridge window [io  0xe000-0xefff]: assigned
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.5: bridge window [io  0xf000-0xffff]: assigned
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: can't assign; no space
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: failed to assign
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: can't assign; no space
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: failed to assign
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: can't assign; no space
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: failed to assign
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x1fff]: assigned
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.7: bridge window [io  0x2000-0x2fff]: assigned
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.6: bridge window [io  0x3000-0x3fff]: assigned
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.5: bridge window [io  0x4000-0x4fff]: assigned
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.4: bridge window [io  0x5000-0x5fff]: assigned
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.3: bridge window [io  0x6000-0x6fff]: assigned
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.2: bridge window [io  0x7000-0x7fff]: assigned
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.1: bridge window [io  0x8000-0x8fff]: assigned
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.0: bridge window [io  0x9000-0x9fff]: assigned
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.7: bridge window [io  0xa000-0xafff]: assigned
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.6: bridge window [io  0xb000-0xbfff]: assigned
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.5: bridge window [io  0xe000-0xefff]: assigned
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.4: bridge window [io  0xf000-0xffff]: assigned
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: can't assign; no space
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: failed to assign
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: can't assign; no space
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: failed to assign
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: can't assign; no space
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: failed to assign
Dec 13 01:42:07 np0005558317 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.4:   bridge window [io  0xf000-0xffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.5:   bridge window [io  0xe000-0xefff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.6:   bridge window [io  0xb000-0xbfff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.7:   bridge window [io  0xa000-0xafff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.0:   bridge window [io  0x9000-0x9fff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.1:   bridge window [io  0x8000-0x8fff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.2:   bridge window [io  0x7000-0x7fff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.3:   bridge window [io  0x6000-0x6fff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.4:   bridge window [io  0x5000-0x5fff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.5:   bridge window [io  0x4000-0x4fff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.6:   bridge window [io  0x3000-0x3fff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.7:   bridge window [io  0x2000-0x2fff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:04.0:   bridge window [io  0x1000-0x1fff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Dec 13 01:42:07 np0005558317 kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:00: resource 9 [mem 0x280000000-0xa7fffffff window]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:01: resource 0 [io  0xc000-0xcfff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:01: resource 1 [mem 0xfc600000-0xfc9fffff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:01: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:02: resource 0 [io  0xc000-0xcfff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:02: resource 1 [mem 0xfc600000-0xfc7fffff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:02: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:03: resource 2 [mem 0xfbe00000-0xfbffffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:04: resource 2 [mem 0xfbc00000-0xfbdfffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:05: resource 2 [mem 0xfba00000-0xfbbfffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:06: resource 0 [io  0xf000-0xffff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:06: resource 2 [mem 0xfb800000-0xfb9fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:07: resource 0 [io  0xe000-0xefff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:07: resource 2 [mem 0xfb600000-0xfb7fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:08: resource 0 [io  0xb000-0xbfff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:08: resource 2 [mem 0xfb400000-0xfb5fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:09: resource 0 [io  0xa000-0xafff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:09: resource 2 [mem 0xfb200000-0xfb3fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:0a: resource 0 [io  0x9000-0x9fff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:0a: resource 1 [mem 0xfda00000-0xfdbfffff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:0a: resource 2 [mem 0xfb000000-0xfb1fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:0b: resource 0 [io  0x8000-0x8fff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:0b: resource 2 [mem 0xfae00000-0xfaffffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:0c: resource 0 [io  0x7000-0x7fff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd600000-0xfd7fffff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:0c: resource 2 [mem 0xfac00000-0xfadfffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:0d: resource 0 [io  0x6000-0x6fff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:0d: resource 1 [mem 0xfd400000-0xfd5fffff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:0d: resource 2 [mem 0xfaa00000-0xfabfffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:0e: resource 0 [io  0x5000-0x5fff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:0e: resource 1 [mem 0xfd200000-0xfd3fffff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:0e: resource 2 [mem 0xfa800000-0xfa9fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:0f: resource 0 [io  0x4000-0x4fff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:0f: resource 1 [mem 0xfd000000-0xfd1fffff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:0f: resource 2 [mem 0xfa600000-0xfa7fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:10: resource 0 [io  0x3000-0x3fff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:10: resource 1 [mem 0xfce00000-0xfcffffff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:10: resource 2 [mem 0xfa400000-0xfa5fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:11: resource 0 [io  0x2000-0x2fff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:11: resource 1 [mem 0xfcc00000-0xfcdfffff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:11: resource 2 [mem 0xfa200000-0xfa3fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:12: resource 0 [io  0x1000-0x1fff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:12: resource 1 [mem 0xfca00000-0xfcbfffff]
Dec 13 01:42:07 np0005558317 kernel: pci_bus 0000:12: resource 2 [mem 0xfa000000-0xfa1fffff 64bit pref]
Dec 13 01:42:07 np0005558317 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 01:42:07 np0005558317 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:42:07 np0005558317 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 01:42:07 np0005558317 kernel: software IO TLB: mapped [mem 0x000000006b000000-0x000000006f000000] (64MB)
Dec 13 01:42:07 np0005558317 kernel: ACPI: bus type thunderbolt registered
Dec 13 01:42:07 np0005558317 kernel: Trying to unpack rootfs image as initramfs...
Dec 13 01:42:07 np0005558317 kernel: Initialise system trusted keyrings
Dec 13 01:42:07 np0005558317 kernel: Key type blacklist registered
Dec 13 01:42:07 np0005558317 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec 13 01:42:07 np0005558317 kernel: zbud: loaded
Dec 13 01:42:07 np0005558317 kernel: integrity: Platform Keyring initialized
Dec 13 01:42:07 np0005558317 kernel: integrity: Machine keyring initialized
Dec 13 01:42:07 np0005558317 kernel: Freeing initrd memory: 87820K
Dec 13 01:42:07 np0005558317 kernel: NET: Registered PF_ALG protocol family
Dec 13 01:42:07 np0005558317 kernel: xor: automatically using best checksumming function   avx       
Dec 13 01:42:07 np0005558317 kernel: Key type asymmetric registered
Dec 13 01:42:07 np0005558317 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:42:07 np0005558317 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 13 01:42:07 np0005558317 kernel: io scheduler mq-deadline registered
Dec 13 01:42:07 np0005558317 kernel: io scheduler kyber registered
Dec 13 01:42:07 np0005558317 kernel: io scheduler bfq registered
Dec 13 01:42:07 np0005558317 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Dec 13 01:42:07 np0005558317 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 39
Dec 13 01:42:07 np0005558317 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40
Dec 13 01:42:07 np0005558317 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40
Dec 13 01:42:07 np0005558317 kernel: shpchp 0000:01:00.0: HPC vendor_id 1b36 device_id e ss_vid 0 ss_did 0
Dec 13 01:42:07 np0005558317 kernel: shpchp 0000:01:00.0: pci_hp_register failed with error -16
Dec 13 01:42:07 np0005558317 kernel: shpchp 0000:01:00.0: Slot initialization failed
Dec 13 01:42:07 np0005558317 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 13 01:42:07 np0005558317 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 13 01:42:07 np0005558317 kernel: ACPI: button: Power Button [PWRF]
Dec 13 01:42:07 np0005558317 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Dec 13 01:42:07 np0005558317 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:42:07 np0005558317 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:42:07 np0005558317 kernel: Non-volatile memory driver v1.3
Dec 13 01:42:07 np0005558317 kernel: rdac: device handler registered
Dec 13 01:42:07 np0005558317 kernel: hp_sw: device handler registered
Dec 13 01:42:07 np0005558317 kernel: emc: device handler registered
Dec 13 01:42:07 np0005558317 kernel: alua: device handler registered
Dec 13 01:42:07 np0005558317 kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller
Dec 13 01:42:07 np0005558317 kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1
Dec 13 01:42:07 np0005558317 kernel: uhci_hcd 0000:02:01.0: detected 2 ports
Dec 13 01:42:07 np0005558317 kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x0000c000
Dec 13 01:42:07 np0005558317 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 13 01:42:07 np0005558317 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 13 01:42:07 np0005558317 kernel: usb usb1: Product: UHCI Host Controller
Dec 13 01:42:07 np0005558317 kernel: usb usb1: Manufacturer: Linux 5.14.0-648.el9.x86_64 uhci_hcd
Dec 13 01:42:07 np0005558317 kernel: usb usb1: SerialNumber: 0000:02:01.0
Dec 13 01:42:07 np0005558317 kernel: hub 1-0:1.0: USB hub found
Dec 13 01:42:07 np0005558317 kernel: hub 1-0:1.0: 2 ports detected
Dec 13 01:42:07 np0005558317 kernel: usbcore: registered new interface driver usbserial_generic
Dec 13 01:42:07 np0005558317 kernel: usbserial: USB Serial support registered for generic
Dec 13 01:42:07 np0005558317 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 01:42:07 np0005558317 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 01:42:07 np0005558317 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 01:42:07 np0005558317 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 01:42:07 np0005558317 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 13 01:42:07 np0005558317 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 13 01:42:07 np0005558317 kernel: rtc_cmos 00:03: registered as rtc0
Dec 13 01:42:07 np0005558317 kernel: rtc_cmos 00:03: setting system clock to 2025-12-13T06:42:07 UTC (1765608127)
Dec 13 01:42:07 np0005558317 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Dec 13 01:42:07 np0005558317 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 01:42:07 np0005558317 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 01:42:07 np0005558317 kernel: usbcore: registered new interface driver usbhid
Dec 13 01:42:07 np0005558317 kernel: usbhid: USB HID core driver
Dec 13 01:42:07 np0005558317 kernel: drop_monitor: Initializing network drop monitor service
Dec 13 01:42:07 np0005558317 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 13 01:42:07 np0005558317 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 13 01:42:07 np0005558317 kernel: Initializing XFRM netlink socket
Dec 13 01:42:07 np0005558317 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:42:07 np0005558317 kernel: Segment Routing with IPv6
Dec 13 01:42:07 np0005558317 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:42:07 np0005558317 kernel: mpls_gso: MPLS GSO support
Dec 13 01:42:07 np0005558317 kernel: IPI shorthand broadcast: enabled
Dec 13 01:42:07 np0005558317 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:42:07 np0005558317 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:42:07 np0005558317 kernel: sched_clock: Marking stable (1121001876, 146024811)->(1370405560, -103378873)
Dec 13 01:42:07 np0005558317 kernel: registered taskstats version 1
Dec 13 01:42:07 np0005558317 kernel: Loading compiled-in X.509 certificates
Dec 13 01:42:07 np0005558317 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
Dec 13 01:42:07 np0005558317 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 13 01:42:07 np0005558317 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 13 01:42:07 np0005558317 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec 13 01:42:07 np0005558317 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec 13 01:42:07 np0005558317 kernel: Demotion targets for Node 0: null
Dec 13 01:42:07 np0005558317 kernel: page_owner is disabled
Dec 13 01:42:07 np0005558317 kernel: Key type .fscrypt registered
Dec 13 01:42:07 np0005558317 kernel: Key type fscrypt-provisioning registered
Dec 13 01:42:07 np0005558317 kernel: Key type big_key registered
Dec 13 01:42:07 np0005558317 kernel: Key type encrypted registered
Dec 13 01:42:07 np0005558317 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:42:07 np0005558317 kernel: Loading compiled-in module X.509 certificates
Dec 13 01:42:07 np0005558317 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
Dec 13 01:42:07 np0005558317 kernel: ima: Allocated hash algorithm: sha256
Dec 13 01:42:07 np0005558317 kernel: ima: No architecture policies found
Dec 13 01:42:07 np0005558317 kernel: evm: Initialising EVM extended attributes:
Dec 13 01:42:07 np0005558317 kernel: evm: security.selinux
Dec 13 01:42:07 np0005558317 kernel: evm: security.SMACK64 (disabled)
Dec 13 01:42:07 np0005558317 kernel: evm: security.SMACK64EXEC (disabled)
Dec 13 01:42:07 np0005558317 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 13 01:42:07 np0005558317 kernel: evm: security.SMACK64MMAP (disabled)
Dec 13 01:42:07 np0005558317 kernel: evm: security.apparmor (disabled)
Dec 13 01:42:07 np0005558317 kernel: evm: security.ima
Dec 13 01:42:07 np0005558317 kernel: evm: security.capability
Dec 13 01:42:07 np0005558317 kernel: evm: HMAC attrs: 0x1
Dec 13 01:42:07 np0005558317 kernel: Running certificate verification RSA selftest
Dec 13 01:42:07 np0005558317 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 13 01:42:07 np0005558317 kernel: Running certificate verification ECDSA selftest
Dec 13 01:42:07 np0005558317 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec 13 01:42:07 np0005558317 kernel: clk: Disabling unused clocks
Dec 13 01:42:07 np0005558317 kernel: Freeing unused decrypted memory: 2028K
Dec 13 01:42:07 np0005558317 kernel: Freeing unused kernel image (initmem) memory: 4192K
Dec 13 01:42:07 np0005558317 kernel: Write protecting the kernel read-only data: 30720k
Dec 13 01:42:07 np0005558317 kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Dec 13 01:42:07 np0005558317 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 13 01:42:07 np0005558317 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 13 01:42:07 np0005558317 kernel: Run /init as init process
Dec 13 01:42:07 np0005558317 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 01:42:07 np0005558317 systemd: Detected virtualization kvm.
Dec 13 01:42:07 np0005558317 systemd: Detected architecture x86-64.
Dec 13 01:42:07 np0005558317 systemd: Running in initrd.
Dec 13 01:42:07 np0005558317 systemd: No hostname configured, using default hostname.
Dec 13 01:42:07 np0005558317 systemd: Hostname set to <localhost>.
Dec 13 01:42:07 np0005558317 systemd: Initializing machine ID from VM UUID.
Dec 13 01:42:07 np0005558317 systemd: Queued start job for default target Initrd Default Target.
Dec 13 01:42:07 np0005558317 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec 13 01:42:07 np0005558317 systemd: Reached target Local Encrypted Volumes.
Dec 13 01:42:07 np0005558317 systemd: Reached target Initrd /usr File System.
Dec 13 01:42:07 np0005558317 systemd: Reached target Local File Systems.
Dec 13 01:42:07 np0005558317 systemd: Reached target Path Units.
Dec 13 01:42:07 np0005558317 systemd: Reached target Slice Units.
Dec 13 01:42:07 np0005558317 systemd: Reached target Swaps.
Dec 13 01:42:07 np0005558317 systemd: Reached target Timer Units.
Dec 13 01:42:07 np0005558317 systemd: Listening on D-Bus System Message Bus Socket.
Dec 13 01:42:07 np0005558317 systemd: Listening on Journal Socket (/dev/log).
Dec 13 01:42:07 np0005558317 systemd: Listening on Journal Socket.
Dec 13 01:42:07 np0005558317 systemd: Listening on udev Control Socket.
Dec 13 01:42:07 np0005558317 systemd: Listening on udev Kernel Socket.
Dec 13 01:42:07 np0005558317 systemd: Reached target Socket Units.
Dec 13 01:42:07 np0005558317 systemd: Starting Create List of Static Device Nodes...
Dec 13 01:42:07 np0005558317 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 13 01:42:07 np0005558317 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 13 01:42:07 np0005558317 kernel: usb 1-1: Product: QEMU USB Tablet
Dec 13 01:42:07 np0005558317 kernel: usb 1-1: Manufacturer: QEMU
Dec 13 01:42:07 np0005558317 kernel: usb 1-1: SerialNumber: 28754-0000:00:02.0:00.0:01.0-1
Dec 13 01:42:07 np0005558317 systemd: Starting Journal Service...
Dec 13 01:42:07 np0005558317 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 13 01:42:07 np0005558317 systemd: Starting Apply Kernel Variables...
Dec 13 01:42:07 np0005558317 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 13 01:42:07 np0005558317 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0
Dec 13 01:42:07 np0005558317 systemd: Starting Create System Users...
Dec 13 01:42:07 np0005558317 systemd: Starting Setup Virtual Console...
Dec 13 01:42:07 np0005558317 systemd: Finished Create List of Static Device Nodes.
Dec 13 01:42:07 np0005558317 systemd: Finished Apply Kernel Variables.
Dec 13 01:42:07 np0005558317 systemd: Finished Create System Users.
Dec 13 01:42:07 np0005558317 systemd: Starting Create Static Device Nodes in /dev...
Dec 13 01:42:07 np0005558317 systemd: Finished Create Static Device Nodes in /dev.
Dec 13 01:42:07 np0005558317 systemd-journald[280]: Journal started
Dec 13 01:42:07 np0005558317 systemd-journald[280]: Runtime Journal (/run/log/journal/bdf0d7c05eef46ac89a1b1ab7cc430f1) is 8.0M, max 153.6M, 145.6M free.
Dec 13 01:42:07 np0005558317 systemd-sysusers[283]: Creating group 'users' with GID 100.
Dec 13 01:42:07 np0005558317 systemd-sysusers[283]: Creating group 'dbus' with GID 81.
Dec 13 01:42:07 np0005558317 systemd-sysusers[283]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 13 01:42:07 np0005558317 systemd: Started Journal Service.
Dec 13 01:42:08 np0005558317 systemd[1]: Starting Create Volatile Files and Directories...
Dec 13 01:42:08 np0005558317 systemd[1]: Finished Create Volatile Files and Directories.
Dec 13 01:42:08 np0005558317 systemd[1]: Finished Setup Virtual Console.
Dec 13 01:42:08 np0005558317 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 13 01:42:08 np0005558317 systemd[1]: Starting dracut cmdline hook...
Dec 13 01:42:08 np0005558317 dracut-cmdline[296]: dracut-9 dracut-057-102.git20250818.el9
Dec 13 01:42:08 np0005558317 dracut-cmdline[296]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 13 01:42:08 np0005558317 systemd[1]: Finished dracut cmdline hook.
Dec 13 01:42:08 np0005558317 systemd[1]: Starting dracut pre-udev hook...
Dec 13 01:42:08 np0005558317 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:42:08 np0005558317 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:42:08 np0005558317 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec 13 01:42:08 np0005558317 kernel: RPC: Registered named UNIX socket transport module.
Dec 13 01:42:08 np0005558317 kernel: RPC: Registered udp transport module.
Dec 13 01:42:08 np0005558317 kernel: RPC: Registered tcp transport module.
Dec 13 01:42:08 np0005558317 kernel: RPC: Registered tcp-with-tls transport module.
Dec 13 01:42:08 np0005558317 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 13 01:42:08 np0005558317 rpc.statd[412]: Version 2.5.4 starting
Dec 13 01:42:08 np0005558317 rpc.statd[412]: Initializing NSM state
Dec 13 01:42:08 np0005558317 rpc.idmapd[417]: Setting log level to 0
Dec 13 01:42:08 np0005558317 systemd[1]: Finished dracut pre-udev hook.
Dec 13 01:42:08 np0005558317 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 13 01:42:08 np0005558317 systemd-udevd[430]: Using default interface naming scheme 'rhel-9.0'.
Dec 13 01:42:08 np0005558317 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 13 01:42:08 np0005558317 systemd[1]: Starting dracut pre-trigger hook...
Dec 13 01:42:08 np0005558317 systemd[1]: Finished dracut pre-trigger hook.
Dec 13 01:42:08 np0005558317 systemd[1]: Starting Coldplug All udev Devices...
Dec 13 01:42:08 np0005558317 systemd[1]: Created slice Slice /system/modprobe.
Dec 13 01:42:08 np0005558317 systemd[1]: Starting Load Kernel Module configfs...
Dec 13 01:42:08 np0005558317 systemd[1]: Finished Coldplug All udev Devices.
Dec 13 01:42:08 np0005558317 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:42:08 np0005558317 systemd[1]: Finished Load Kernel Module configfs.
Dec 13 01:42:08 np0005558317 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 13 01:42:08 np0005558317 systemd[1]: Reached target Network.
Dec 13 01:42:08 np0005558317 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 13 01:42:08 np0005558317 systemd[1]: Starting dracut initqueue hook...
Dec 13 01:42:08 np0005558317 kernel: virtio_blk virtio2: 4/0/0 default/read/poll queues
Dec 13 01:42:08 np0005558317 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec 13 01:42:08 np0005558317 kernel: vda: vda1
Dec 13 01:42:08 np0005558317 systemd-udevd[457]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:42:08 np0005558317 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 01:42:08 np0005558317 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Dec 13 01:42:08 np0005558317 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Dec 13 01:42:08 np0005558317 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only 
Dec 13 01:42:08 np0005558317 kernel: scsi host0: ahci
Dec 13 01:42:08 np0005558317 kernel: scsi host1: ahci
Dec 13 01:42:08 np0005558317 kernel: scsi host2: ahci
Dec 13 01:42:08 np0005558317 kernel: scsi host3: ahci
Dec 13 01:42:08 np0005558317 kernel: scsi host4: ahci
Dec 13 01:42:08 np0005558317 kernel: scsi host5: ahci
Dec 13 01:42:08 np0005558317 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22100 irq 49 lpm-pol 0
Dec 13 01:42:08 np0005558317 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22180 irq 49 lpm-pol 0
Dec 13 01:42:08 np0005558317 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22200 irq 49 lpm-pol 0
Dec 13 01:42:08 np0005558317 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22280 irq 49 lpm-pol 0
Dec 13 01:42:08 np0005558317 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22300 irq 49 lpm-pol 0
Dec 13 01:42:08 np0005558317 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22380 irq 49 lpm-pol 0
Dec 13 01:42:08 np0005558317 systemd[1]: Found device /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266.
Dec 13 01:42:08 np0005558317 systemd[1]: Reached target Initrd Root Device.
Dec 13 01:42:08 np0005558317 systemd[1]: Mounting Kernel Configuration File System...
Dec 13 01:42:08 np0005558317 systemd[1]: Mounted Kernel Configuration File System.
Dec 13 01:42:08 np0005558317 systemd[1]: Reached target System Initialization.
Dec 13 01:42:08 np0005558317 systemd[1]: Reached target Basic System.
Dec 13 01:42:08 np0005558317 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 01:42:08 np0005558317 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 13 01:42:08 np0005558317 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 01:42:08 np0005558317 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 01:42:08 np0005558317 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 13 01:42:08 np0005558317 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 01:42:08 np0005558317 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 01:42:08 np0005558317 kernel: ata1.00: applying bridge limits
Dec 13 01:42:08 np0005558317 kernel: ata1.00: configured for UDMA/100
Dec 13 01:42:08 np0005558317 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec 13 01:42:08 np0005558317 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 13 01:42:08 np0005558317 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 01:42:08 np0005558317 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 01:42:09 np0005558317 systemd[1]: Finished dracut initqueue hook.
Dec 13 01:42:09 np0005558317 systemd[1]: Reached target Preparation for Remote File Systems.
Dec 13 01:42:09 np0005558317 systemd[1]: Reached target Remote Encrypted Volumes.
Dec 13 01:42:09 np0005558317 systemd[1]: Reached target Remote File Systems.
Dec 13 01:42:09 np0005558317 systemd[1]: Starting dracut pre-mount hook...
Dec 13 01:42:09 np0005558317 systemd[1]: Finished dracut pre-mount hook.
Dec 13 01:42:09 np0005558317 systemd[1]: Starting File System Check on /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266...
Dec 13 01:42:09 np0005558317 systemd-fsck[524]: /usr/sbin/fsck.xfs: XFS file system.
Dec 13 01:42:09 np0005558317 systemd[1]: Finished File System Check on /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266.
Dec 13 01:42:09 np0005558317 systemd[1]: Mounting /sysroot...
Dec 13 01:42:09 np0005558317 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 13 01:42:09 np0005558317 kernel: XFS (vda1): Mounting V5 Filesystem cbdedf45-ed1d-4952-82a8-33a12c0ba266
Dec 13 01:42:09 np0005558317 kernel: XFS (vda1): Ending clean mount
Dec 13 01:42:09 np0005558317 systemd[1]: Mounted /sysroot.
Dec 13 01:42:09 np0005558317 systemd[1]: Reached target Initrd Root File System.
Dec 13 01:42:09 np0005558317 systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 13 01:42:09 np0005558317 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:42:09 np0005558317 systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 13 01:42:09 np0005558317 systemd[1]: Reached target Initrd File Systems.
Dec 13 01:42:09 np0005558317 systemd[1]: Reached target Initrd Default Target.
Dec 13 01:42:09 np0005558317 systemd[1]: Starting dracut mount hook...
Dec 13 01:42:09 np0005558317 systemd[1]: Finished dracut mount hook.
Dec 13 01:42:09 np0005558317 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 13 01:42:09 np0005558317 rpc.idmapd[417]: exiting on signal 15
Dec 13 01:42:09 np0005558317 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 13 01:42:09 np0005558317 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 13 01:42:09 np0005558317 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped target Network.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped target Timer Units.
Dec 13 01:42:09 np0005558317 systemd[1]: dbus.socket: Deactivated successfully.
Dec 13 01:42:09 np0005558317 systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 13 01:42:09 np0005558317 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped target Initrd Default Target.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped target Basic System.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped target Initrd Root Device.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped target Initrd /usr File System.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped target Path Units.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped target Remote File Systems.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped target Slice Units.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped target Socket Units.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped target System Initialization.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped target Local File Systems.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped target Swaps.
Dec 13 01:42:09 np0005558317 systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped dracut mount hook.
Dec 13 01:42:09 np0005558317 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped dracut pre-mount hook.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped target Local Encrypted Volumes.
Dec 13 01:42:09 np0005558317 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 13 01:42:09 np0005558317 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped dracut initqueue hook.
Dec 13 01:42:09 np0005558317 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped Apply Kernel Variables.
Dec 13 01:42:09 np0005558317 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped Create Volatile Files and Directories.
Dec 13 01:42:09 np0005558317 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped Coldplug All udev Devices.
Dec 13 01:42:09 np0005558317 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped dracut pre-trigger hook.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 13 01:42:09 np0005558317 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped Setup Virtual Console.
Dec 13 01:42:09 np0005558317 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:42:09 np0005558317 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 13 01:42:09 np0005558317 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 13 01:42:09 np0005558317 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:42:09 np0005558317 systemd[1]: Closed udev Control Socket.
Dec 13 01:42:09 np0005558317 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:42:09 np0005558317 systemd[1]: Closed udev Kernel Socket.
Dec 13 01:42:09 np0005558317 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped dracut pre-udev hook.
Dec 13 01:42:09 np0005558317 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped dracut cmdline hook.
Dec 13 01:42:09 np0005558317 systemd[1]: Starting Cleanup udev Database...
Dec 13 01:42:09 np0005558317 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 13 01:42:09 np0005558317 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped Create List of Static Device Nodes.
Dec 13 01:42:09 np0005558317 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 13 01:42:09 np0005558317 systemd[1]: Stopped Create System Users.
Dec 13 01:42:09 np0005558317 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:42:09 np0005558317 systemd[1]: Finished Cleanup udev Database.
Dec 13 01:42:09 np0005558317 systemd[1]: Reached target Switch Root.
Dec 13 01:42:09 np0005558317 systemd[1]: Starting Switch Root...
Dec 13 01:42:09 np0005558317 systemd[1]: Switching root.
Dec 13 01:42:09 np0005558317 systemd-journald[280]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:42:09 np0005558317 systemd-journald[280]: Journal stopped
Dec 13 01:42:10 np0005558317 kernel: audit: type=1404 audit(1765608129.782:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 13 01:42:10 np0005558317 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 01:42:10 np0005558317 kernel: SELinux:  policy capability open_perms=1
Dec 13 01:42:10 np0005558317 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 01:42:10 np0005558317 kernel: SELinux:  policy capability always_check_network=0
Dec 13 01:42:10 np0005558317 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 01:42:10 np0005558317 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 01:42:10 np0005558317 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 01:42:10 np0005558317 kernel: audit: type=1403 audit(1765608129.899:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:42:10 np0005558317 systemd: Successfully loaded SELinux policy in 120.941ms.
Dec 13 01:42:10 np0005558317 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.238ms.
Dec 13 01:42:10 np0005558317 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 01:42:10 np0005558317 systemd: Detected virtualization kvm.
Dec 13 01:42:10 np0005558317 systemd: Detected architecture x86-64.
Dec 13 01:42:10 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 01:42:10 np0005558317 systemd: initrd-switch-root.service: Deactivated successfully.
Dec 13 01:42:10 np0005558317 systemd: Stopped Switch Root.
Dec 13 01:42:10 np0005558317 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:42:10 np0005558317 systemd: Created slice Slice /system/getty.
Dec 13 01:42:10 np0005558317 systemd: Created slice Slice /system/serial-getty.
Dec 13 01:42:10 np0005558317 systemd: Created slice Slice /system/sshd-keygen.
Dec 13 01:42:10 np0005558317 systemd: Created slice User and Session Slice.
Dec 13 01:42:10 np0005558317 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec 13 01:42:10 np0005558317 systemd: Started Forward Password Requests to Wall Directory Watch.
Dec 13 01:42:10 np0005558317 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:42:10 np0005558317 systemd: Reached target Local Encrypted Volumes.
Dec 13 01:42:10 np0005558317 systemd: Stopped target Switch Root.
Dec 13 01:42:10 np0005558317 systemd: Stopped target Initrd File Systems.
Dec 13 01:42:10 np0005558317 systemd: Stopped target Initrd Root File System.
Dec 13 01:42:10 np0005558317 systemd: Reached target Local Integrity Protected Volumes.
Dec 13 01:42:10 np0005558317 systemd: Reached target Path Units.
Dec 13 01:42:10 np0005558317 systemd: Reached target rpc_pipefs.target.
Dec 13 01:42:10 np0005558317 systemd: Reached target Slice Units.
Dec 13 01:42:10 np0005558317 systemd: Reached target Swaps.
Dec 13 01:42:10 np0005558317 systemd: Reached target Local Verity Protected Volumes.
Dec 13 01:42:10 np0005558317 systemd: Listening on RPCbind Server Activation Socket.
Dec 13 01:42:10 np0005558317 systemd: Reached target RPC Port Mapper.
Dec 13 01:42:10 np0005558317 systemd: Listening on Process Core Dump Socket.
Dec 13 01:42:10 np0005558317 systemd: Listening on initctl Compatibility Named Pipe.
Dec 13 01:42:10 np0005558317 systemd: Listening on udev Control Socket.
Dec 13 01:42:10 np0005558317 systemd: Listening on udev Kernel Socket.
Dec 13 01:42:10 np0005558317 systemd: Mounting Huge Pages File System...
Dec 13 01:42:10 np0005558317 systemd: Mounting POSIX Message Queue File System...
Dec 13 01:42:10 np0005558317 systemd: Mounting Kernel Debug File System...
Dec 13 01:42:10 np0005558317 systemd: Mounting Kernel Trace File System...
Dec 13 01:42:10 np0005558317 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 13 01:42:10 np0005558317 systemd: Starting Create List of Static Device Nodes...
Dec 13 01:42:10 np0005558317 systemd: Starting Load Kernel Module configfs...
Dec 13 01:42:10 np0005558317 systemd: Starting Load Kernel Module drm...
Dec 13 01:42:10 np0005558317 systemd: Starting Load Kernel Module efi_pstore...
Dec 13 01:42:10 np0005558317 systemd: Starting Load Kernel Module fuse...
Dec 13 01:42:10 np0005558317 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec 13 01:42:10 np0005558317 systemd: systemd-fsck-root.service: Deactivated successfully.
Dec 13 01:42:10 np0005558317 systemd: Stopped File System Check on Root Device.
Dec 13 01:42:10 np0005558317 systemd: Stopped Journal Service.
Dec 13 01:42:10 np0005558317 systemd: Starting Journal Service...
Dec 13 01:42:10 np0005558317 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 13 01:42:10 np0005558317 systemd: Starting Generate network units from Kernel command line...
Dec 13 01:42:10 np0005558317 kernel: fuse: init (API version 7.37)
Dec 13 01:42:10 np0005558317 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 01:42:10 np0005558317 systemd: Starting Remount Root and Kernel File Systems...
Dec 13 01:42:10 np0005558317 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:42:10 np0005558317 systemd: Starting Apply Kernel Variables...
Dec 13 01:42:10 np0005558317 systemd: Starting Coldplug All udev Devices...
Dec 13 01:42:10 np0005558317 systemd-journald[650]: Journal started
Dec 13 01:42:10 np0005558317 systemd-journald[650]: Runtime Journal (/run/log/journal/64f1d6692049d8be5e8b216cc203502c) is 8.0M, max 153.6M, 145.6M free.
Dec 13 01:42:10 np0005558317 systemd[1]: Queued start job for default target Multi-User System.
Dec 13 01:42:10 np0005558317 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 01:42:10 np0005558317 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec 13 01:42:10 np0005558317 systemd: Started Journal Service.
Dec 13 01:42:10 np0005558317 systemd[1]: Mounted Huge Pages File System.
Dec 13 01:42:10 np0005558317 systemd[1]: Mounted POSIX Message Queue File System.
Dec 13 01:42:10 np0005558317 systemd[1]: Mounted Kernel Debug File System.
Dec 13 01:42:10 np0005558317 systemd[1]: Mounted Kernel Trace File System.
Dec 13 01:42:10 np0005558317 systemd[1]: Finished Create List of Static Device Nodes.
Dec 13 01:42:10 np0005558317 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:42:10 np0005558317 systemd[1]: Finished Load Kernel Module configfs.
Dec 13 01:42:10 np0005558317 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:42:10 np0005558317 systemd[1]: Finished Load Kernel Module efi_pstore.
Dec 13 01:42:10 np0005558317 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:42:10 np0005558317 systemd[1]: Finished Load Kernel Module fuse.
Dec 13 01:42:10 np0005558317 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec 13 01:42:10 np0005558317 systemd[1]: Finished Generate network units from Kernel command line.
Dec 13 01:42:10 np0005558317 systemd[1]: Finished Remount Root and Kernel File Systems.
Dec 13 01:42:10 np0005558317 systemd[1]: Finished Apply Kernel Variables.
Dec 13 01:42:10 np0005558317 systemd[1]: Mounting FUSE Control File System...
Dec 13 01:42:10 np0005558317 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 13 01:42:10 np0005558317 systemd[1]: Starting Rebuild Hardware Database...
Dec 13 01:42:10 np0005558317 systemd[1]: Starting Flush Journal to Persistent Storage...
Dec 13 01:42:10 np0005558317 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:42:10 np0005558317 kernel: ACPI: bus type drm_connector registered
Dec 13 01:42:10 np0005558317 systemd[1]: Starting Load/Save OS Random Seed...
Dec 13 01:42:10 np0005558317 systemd[1]: Starting Create System Users...
Dec 13 01:42:10 np0005558317 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:42:10 np0005558317 systemd[1]: Finished Load Kernel Module drm.
Dec 13 01:42:10 np0005558317 systemd-journald[650]: Runtime Journal (/run/log/journal/64f1d6692049d8be5e8b216cc203502c) is 8.0M, max 153.6M, 145.6M free.
Dec 13 01:42:10 np0005558317 systemd-journald[650]: Received client request to flush runtime journal.
Dec 13 01:42:10 np0005558317 systemd[1]: Mounted FUSE Control File System.
Dec 13 01:42:10 np0005558317 systemd[1]: Finished Load/Save OS Random Seed.
Dec 13 01:42:10 np0005558317 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 13 01:42:10 np0005558317 systemd[1]: Finished Flush Journal to Persistent Storage.
Dec 13 01:42:10 np0005558317 systemd[1]: Finished Create System Users.
Dec 13 01:42:10 np0005558317 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 13 01:42:10 np0005558317 systemd[1]: Finished Coldplug All udev Devices.
Dec 13 01:42:10 np0005558317 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 13 01:42:10 np0005558317 systemd[1]: Reached target Preparation for Local File Systems.
Dec 13 01:42:10 np0005558317 systemd[1]: Reached target Local File Systems.
Dec 13 01:42:10 np0005558317 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec 13 01:42:10 np0005558317 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec 13 01:42:10 np0005558317 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:42:10 np0005558317 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec 13 01:42:10 np0005558317 systemd[1]: Starting Automatic Boot Loader Update...
Dec 13 01:42:10 np0005558317 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec 13 01:42:10 np0005558317 systemd[1]: Starting Create Volatile Files and Directories...
Dec 13 01:42:10 np0005558317 bootctl[667]: Couldn't find EFI system partition, skipping.
Dec 13 01:42:10 np0005558317 systemd[1]: Finished Automatic Boot Loader Update.
Dec 13 01:42:10 np0005558317 systemd[1]: Finished Create Volatile Files and Directories.
Dec 13 01:42:10 np0005558317 systemd[1]: Starting Security Auditing Service...
Dec 13 01:42:10 np0005558317 systemd[1]: Starting RPC Bind...
Dec 13 01:42:10 np0005558317 systemd[1]: Starting Rebuild Journal Catalog...
Dec 13 01:42:10 np0005558317 auditd[673]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec 13 01:42:10 np0005558317 auditd[673]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec 13 01:42:10 np0005558317 systemd[1]: Finished Rebuild Journal Catalog.
Dec 13 01:42:10 np0005558317 systemd[1]: Started RPC Bind.
Dec 13 01:42:10 np0005558317 augenrules[678]: /sbin/augenrules: No change
Dec 13 01:42:10 np0005558317 augenrules[693]: No rules
Dec 13 01:42:10 np0005558317 augenrules[693]: enabled 1
Dec 13 01:42:10 np0005558317 augenrules[693]: failure 1
Dec 13 01:42:10 np0005558317 augenrules[693]: pid 673
Dec 13 01:42:10 np0005558317 augenrules[693]: rate_limit 0
Dec 13 01:42:10 np0005558317 augenrules[693]: backlog_limit 8192
Dec 13 01:42:10 np0005558317 augenrules[693]: lost 0
Dec 13 01:42:10 np0005558317 augenrules[693]: backlog 0
Dec 13 01:42:10 np0005558317 augenrules[693]: backlog_wait_time 60000
Dec 13 01:42:10 np0005558317 augenrules[693]: backlog_wait_time_actual 0
Dec 13 01:42:10 np0005558317 augenrules[693]: enabled 1
Dec 13 01:42:10 np0005558317 augenrules[693]: failure 1
Dec 13 01:42:10 np0005558317 augenrules[693]: pid 673
Dec 13 01:42:10 np0005558317 augenrules[693]: rate_limit 0
Dec 13 01:42:10 np0005558317 augenrules[693]: backlog_limit 8192
Dec 13 01:42:10 np0005558317 augenrules[693]: lost 0
Dec 13 01:42:10 np0005558317 augenrules[693]: backlog 0
Dec 13 01:42:10 np0005558317 augenrules[693]: backlog_wait_time 60000
Dec 13 01:42:10 np0005558317 augenrules[693]: backlog_wait_time_actual 0
Dec 13 01:42:10 np0005558317 augenrules[693]: enabled 1
Dec 13 01:42:10 np0005558317 augenrules[693]: failure 1
Dec 13 01:42:10 np0005558317 augenrules[693]: pid 673
Dec 13 01:42:10 np0005558317 augenrules[693]: rate_limit 0
Dec 13 01:42:10 np0005558317 augenrules[693]: backlog_limit 8192
Dec 13 01:42:10 np0005558317 augenrules[693]: lost 0
Dec 13 01:42:10 np0005558317 augenrules[693]: backlog 0
Dec 13 01:42:10 np0005558317 augenrules[693]: backlog_wait_time 60000
Dec 13 01:42:10 np0005558317 augenrules[693]: backlog_wait_time_actual 0
Dec 13 01:42:10 np0005558317 systemd[1]: Started Security Auditing Service.
Dec 13 01:42:10 np0005558317 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec 13 01:42:10 np0005558317 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec 13 01:42:10 np0005558317 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec 13 01:42:10 np0005558317 systemd[1]: Finished Rebuild Hardware Database.
Dec 13 01:42:10 np0005558317 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 13 01:42:10 np0005558317 systemd[1]: Starting Update is Completed...
Dec 13 01:42:10 np0005558317 systemd[1]: Finished Update is Completed.
Dec 13 01:42:10 np0005558317 systemd-udevd[701]: Using default interface naming scheme 'rhel-9.0'.
Dec 13 01:42:10 np0005558317 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 13 01:42:10 np0005558317 systemd[1]: Reached target System Initialization.
Dec 13 01:42:10 np0005558317 systemd[1]: Started dnf makecache --timer.
Dec 13 01:42:10 np0005558317 systemd[1]: Started Daily rotation of log files.
Dec 13 01:42:10 np0005558317 systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec 13 01:42:10 np0005558317 systemd[1]: Reached target Timer Units.
Dec 13 01:42:10 np0005558317 systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 13 01:42:10 np0005558317 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec 13 01:42:10 np0005558317 systemd[1]: Reached target Socket Units.
Dec 13 01:42:10 np0005558317 systemd[1]: Starting D-Bus System Message Bus...
Dec 13 01:42:10 np0005558317 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 01:42:10 np0005558317 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec 13 01:42:10 np0005558317 systemd[1]: Starting Load Kernel Module configfs...
Dec 13 01:42:10 np0005558317 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:42:10 np0005558317 systemd[1]: Finished Load Kernel Module configfs.
Dec 13 01:42:10 np0005558317 systemd-udevd[713]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:42:10 np0005558317 systemd[1]: Started D-Bus System Message Bus.
Dec 13 01:42:10 np0005558317 systemd[1]: Reached target Basic System.
Dec 13 01:42:10 np0005558317 dbus-broker-lau[727]: Ready
Dec 13 01:42:10 np0005558317 systemd[1]: Starting NTP client/server...
Dec 13 01:42:10 np0005558317 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec 13 01:42:10 np0005558317 systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec 13 01:42:10 np0005558317 systemd[1]: Starting IPv4 firewall with iptables...
Dec 13 01:42:10 np0005558317 systemd[1]: Started irqbalance daemon.
Dec 13 01:42:10 np0005558317 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec 13 01:42:10 np0005558317 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 13 01:42:10 np0005558317 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 13 01:42:10 np0005558317 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 13 01:42:10 np0005558317 systemd[1]: Reached target sshd-keygen.target.
Dec 13 01:42:10 np0005558317 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec 13 01:42:10 np0005558317 systemd[1]: Reached target User and Group Name Lookups.
Dec 13 01:42:10 np0005558317 systemd[1]: Starting User Login Management...
Dec 13 01:42:10 np0005558317 systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec 13 01:42:10 np0005558317 chronyd[754]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 13 01:42:10 np0005558317 chronyd[754]: Loaded 0 symmetric keys
Dec 13 01:42:10 np0005558317 chronyd[754]: Using right/UTC timezone to obtain leap second data
Dec 13 01:42:10 np0005558317 chronyd[754]: Loaded seccomp filter (level 2)
Dec 13 01:42:10 np0005558317 systemd[1]: Started NTP client/server.
Dec 13 01:42:10 np0005558317 systemd-logind[745]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 13 01:42:10 np0005558317 systemd-logind[745]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 13 01:42:10 np0005558317 systemd-logind[745]: New seat seat0.
Dec 13 01:42:10 np0005558317 systemd[1]: Started User Login Management.
Dec 13 01:42:10 np0005558317 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec 13 01:42:10 np0005558317 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec 13 01:42:10 np0005558317 kernel: lpc_ich 0000:00:1f.0: I/O space for GPIO uninitialized
Dec 13 01:42:10 np0005558317 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Dec 13 01:42:10 np0005558317 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec 13 01:42:11 np0005558317 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Dec 13 01:42:11 np0005558317 kernel: iTCO_vendor_support: vendor-support=0
Dec 13 01:42:11 np0005558317 kernel: Console: switching to colour dummy device 80x25
Dec 13 01:42:11 np0005558317 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 13 01:42:11 np0005558317 kernel: [drm] features: -context_init
Dec 13 01:42:11 np0005558317 kernel: iTCO_wdt iTCO_wdt.1.auto: Found a ICH9 TCO device (Version=2, TCOBASE=0x0660)
Dec 13 01:42:11 np0005558317 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 01:42:11 np0005558317 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 01:42:11 np0005558317 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 01:42:11 np0005558317 kernel: [drm] number of scanouts: 1
Dec 13 01:42:11 np0005558317 kernel: [drm] number of cap sets: 0
Dec 13 01:42:11 np0005558317 iptables.init[739]: iptables: Applying firewall rules: [  OK  ]
Dec 13 01:42:11 np0005558317 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Dec 13 01:42:11 np0005558317 systemd[1]: Finished IPv4 firewall with iptables.
Dec 13 01:42:11 np0005558317 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 13 01:42:11 np0005558317 kernel: Console: switching to colour frame buffer device 160x50
Dec 13 01:42:11 np0005558317 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 13 01:42:11 np0005558317 kernel: iTCO_wdt iTCO_wdt.1.auto: initialized. heartbeat=30 sec (nowayout=0)
Dec 13 01:42:11 np0005558317 kernel: kvm_amd: TSC scaling supported
Dec 13 01:42:11 np0005558317 kernel: kvm_amd: Nested Virtualization enabled
Dec 13 01:42:11 np0005558317 kernel: kvm_amd: Nested Paging enabled
Dec 13 01:42:11 np0005558317 kernel: kvm_amd: LBR virtualization supported
Dec 13 01:42:11 np0005558317 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Dec 13 01:42:11 np0005558317 kernel: kvm_amd: Virtual GIF supported
Dec 13 01:42:11 np0005558317 cloud-init[794]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 13 Dec 2025 06:42:11 +0000. Up 4.93 seconds.
Dec 13 01:42:11 np0005558317 systemd[1]: run-cloud\x2dinit-tmp-tmpzpl1w9jx.mount: Deactivated successfully.
Dec 13 01:42:11 np0005558317 systemd[1]: Starting Hostname Service...
Dec 13 01:42:11 np0005558317 systemd[1]: Started Hostname Service.
Dec 13 01:42:11 np0005558317 systemd-hostnamed[808]: Hostname set to <np0005558317> (static)
Dec 13 01:42:11 np0005558317 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec 13 01:42:11 np0005558317 systemd[1]: Reached target Preparation for Network.
Dec 13 01:42:11 np0005558317 systemd[1]: Starting Network Manager...
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7334] NetworkManager (version 1.54.2-1.el9) is starting... (boot:7e7986d9-0598-4067-a630-6e2fad28fcbc)
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7338] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7417] manager[0x55cb2471e000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7444] hostname: hostname: using hostnamed
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7444] hostname: static hostname changed from (none) to "np0005558317"
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7446] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7545] manager[0x55cb2471e000]: rfkill: Wi-Fi hardware radio set enabled
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7545] manager[0x55cb2471e000]: rfkill: WWAN hardware radio set enabled
Dec 13 01:42:11 np0005558317 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7589] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7589] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7590] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7590] manager: Networking is enabled by state file
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7591] settings: Loaded settings plugin: keyfile (internal)
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7613] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7628] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7637] dhcp: init: Using DHCP client 'internal'
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7639] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7648] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7654] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7658] device (lo): Activation: starting connection 'lo' (08c9145e-912a-4b86-86a1-5730fa82ae86)
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7665] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7666] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7686] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7688] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7689] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7690] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7691] device (eth0): carrier: link connected
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7692] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7696] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 13 01:42:11 np0005558317 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7701] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7704] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7704] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7706] manager: NetworkManager state is now CONNECTING
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7707] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7712] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7717] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 01:42:11 np0005558317 systemd[1]: Started Network Manager.
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7720] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Dec 13 01:42:11 np0005558317 systemd[1]: Reached target Network.
Dec 13 01:42:11 np0005558317 systemd[1]: Starting Network Manager Wait Online...
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7766] dhcp4 (eth0): state changed new lease, address=192.168.25.195
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7774] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 13 01:42:11 np0005558317 systemd[1]: Starting GSSAPI Proxy Daemon...
Dec 13 01:42:11 np0005558317 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7869] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7883] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 13 01:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7888] device (lo): Activation: successful, device activated.
Dec 13 01:42:11 np0005558317 systemd[1]: Started GSSAPI Proxy Daemon.
Dec 13 01:42:11 np0005558317 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 13 01:42:11 np0005558317 systemd[1]: Reached target NFS client services.
Dec 13 01:42:11 np0005558317 systemd[1]: Reached target Preparation for Remote File Systems.
Dec 13 01:42:11 np0005558317 systemd[1]: Reached target Remote File Systems.
Dec 13 01:42:11 np0005558317 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 01:42:12 np0005558317 NetworkManager[812]: <info>  [1765608132.8789] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 01:42:13 np0005558317 NetworkManager[812]: <info>  [1765608133.9691] dhcp6 (eth0): state changed new lease, address=2001:db8::1cf
Dec 13 01:42:15 np0005558317 NetworkManager[812]: <info>  [1765608135.5679] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 01:42:15 np0005558317 NetworkManager[812]: <info>  [1765608135.5710] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 01:42:15 np0005558317 NetworkManager[812]: <info>  [1765608135.5712] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 01:42:15 np0005558317 NetworkManager[812]: <info>  [1765608135.5715] manager: NetworkManager state is now CONNECTED_SITE
Dec 13 01:42:15 np0005558317 NetworkManager[812]: <info>  [1765608135.5718] device (eth0): Activation: successful, device activated.
Dec 13 01:42:15 np0005558317 NetworkManager[812]: <info>  [1765608135.5723] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 13 01:42:15 np0005558317 NetworkManager[812]: <info>  [1765608135.5725] manager: startup complete
Dec 13 01:42:15 np0005558317 systemd[1]: Finished Network Manager Wait Online.
Dec 13 01:42:15 np0005558317 systemd[1]: Starting Cloud-init: Network Stage...
Dec 13 01:42:15 np0005558317 cloud-init[878]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 13 Dec 2025 06:42:15 +0000. Up 9.41 seconds.
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: ++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: | Device |  Up  |           Address           |      Mask     | Scope  |     Hw-Address    |
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: |  eth0  | True |        192.168.25.195       | 255.255.255.0 | global | fa:16:3e:b1:0c:2a |
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: |  eth0  | True |      2001:db8::1cf/128      |       .       | global | fa:16:3e:b1:0c:2a |
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: |  eth0  | True | fe80::f816:3eff:feb1:c2a/64 |       .       |  link  | fa:16:3e:b1:0c:2a |
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: |   lo   | True |          127.0.0.1          |   255.0.0.0   |  host  |         .         |
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: |   lo   | True |           ::1/128           |       .       |  host  |         .         |
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: | Route |   Destination   |   Gateway    |     Genmask     | Interface | Flags |
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: |   0   |     0.0.0.0     | 192.168.25.1 |     0.0.0.0     |    eth0   |   UG  |
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: |   1   | 169.254.169.254 | 192.168.25.2 | 255.255.255.255 |    eth0   |  UGH  |
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: |   2   |   192.168.25.0  |   0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: ++++++++++++++++++++++Route IPv6 info++++++++++++++++++++++
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: +-------+---------------+-------------+-----------+-------+
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: | Route |  Destination  |   Gateway   | Interface | Flags |
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: +-------+---------------+-------------+-----------+-------+
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: |   1   |  2001:db8::1  |      ::     |    eth0   |   U   |
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: |   2   | 2001:db8::1cf |      ::     |    eth0   |   U   |
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: |   3   |   fe80::/64   |      ::     |    eth0   |   U   |
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: |   4   |      ::/0     | 2001:db8::1 |    eth0   |   UG  |
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: |   6   |     local     |      ::     |    eth0   |   U   |
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: |   7   |     local     |      ::     |    eth0   |   U   |
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: |   8   |   multicast   |      ::     |    eth0   |   U   |
Dec 13 01:42:15 np0005558317 cloud-init[878]: ci-info: +-------+---------------+-------------+-----------+-------+
Dec 13 01:42:16 np0005558317 cloud-init[878]: Generating public/private rsa key pair.
Dec 13 01:42:16 np0005558317 cloud-init[878]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec 13 01:42:16 np0005558317 cloud-init[878]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec 13 01:42:16 np0005558317 cloud-init[878]: The key fingerprint is:
Dec 13 01:42:16 np0005558317 cloud-init[878]: SHA256:GSQgTqH8TLCYUmB1S+je0NSUnMG2hLMJ9cqychnvHnQ root@np0005558317
Dec 13 01:42:16 np0005558317 cloud-init[878]: The key's randomart image is:
Dec 13 01:42:16 np0005558317 cloud-init[878]: +---[RSA 3072]----+
Dec 13 01:42:16 np0005558317 cloud-init[878]: |.+*o+=Bo=        |
Dec 13 01:42:16 np0005558317 cloud-init[878]: |+*o+o+o@         |
Dec 13 01:42:16 np0005558317 cloud-init[878]: |=oo.+.*.o        |
Dec 13 01:42:16 np0005558317 cloud-init[878]: |. +o.+.. o       |
Dec 13 01:42:16 np0005558317 cloud-init[878]: |  .=o+ ES        |
Dec 13 01:42:16 np0005558317 cloud-init[878]: |   .B..          |
Dec 13 01:42:16 np0005558317 cloud-init[878]: | . + o           |
Dec 13 01:42:16 np0005558317 cloud-init[878]: |  o . .          |
Dec 13 01:42:16 np0005558317 cloud-init[878]: |    .o           |
Dec 13 01:42:16 np0005558317 cloud-init[878]: +----[SHA256]-----+
Dec 13 01:42:16 np0005558317 cloud-init[878]: Generating public/private ecdsa key pair.
Dec 13 01:42:16 np0005558317 cloud-init[878]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec 13 01:42:16 np0005558317 cloud-init[878]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec 13 01:42:16 np0005558317 cloud-init[878]: The key fingerprint is:
Dec 13 01:42:16 np0005558317 cloud-init[878]: SHA256:ZhJNiM3whCkPRB3WQX/q5XpxOzDsefXdlDO7VjNeKk0 root@np0005558317
Dec 13 01:42:16 np0005558317 cloud-init[878]: The key's randomart image is:
Dec 13 01:42:16 np0005558317 cloud-init[878]: +---[ECDSA 256]---+
Dec 13 01:42:16 np0005558317 cloud-init[878]: | oo.+@+o.        |
Dec 13 01:42:16 np0005558317 cloud-init[878]: |  o.=o=+         |
Dec 13 01:42:16 np0005558317 cloud-init[878]: |   +  o o .      |
Dec 13 01:42:16 np0005558317 cloud-init[878]: |    .  . o       |
Dec 13 01:42:16 np0005558317 cloud-init[878]: |      . S..     .|
Dec 13 01:42:16 np0005558317 cloud-init[878]: |       = o= . E*+|
Dec 13 01:42:16 np0005558317 cloud-init[878]: |        ...* =.+@|
Dec 13 01:42:16 np0005558317 cloud-init[878]: |         .+ = o++|
Dec 13 01:42:16 np0005558317 cloud-init[878]: |        .. . o...|
Dec 13 01:42:16 np0005558317 cloud-init[878]: +----[SHA256]-----+
Dec 13 01:42:16 np0005558317 cloud-init[878]: Generating public/private ed25519 key pair.
Dec 13 01:42:16 np0005558317 cloud-init[878]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec 13 01:42:16 np0005558317 cloud-init[878]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec 13 01:42:16 np0005558317 cloud-init[878]: The key fingerprint is:
Dec 13 01:42:16 np0005558317 cloud-init[878]: SHA256:CGZApeQZ/l1aGDFrHyDPsKEwVnJrfSUR1oPeef/8Nyg root@np0005558317
Dec 13 01:42:16 np0005558317 cloud-init[878]: The key's randomart image is:
Dec 13 01:42:16 np0005558317 cloud-init[878]: +--[ED25519 256]--+
Dec 13 01:42:16 np0005558317 cloud-init[878]: |o+*+= =*=.       |
Dec 13 01:42:16 np0005558317 cloud-init[878]: |.*o*oB.Boo       |
Dec 13 01:42:16 np0005558317 cloud-init[878]: |  *o=.B.= o      |
Dec 13 01:42:16 np0005558317 cloud-init[878]: |  .+ +.B + .     |
Dec 13 01:42:16 np0005558317 cloud-init[878]: |    . + S . .    |
Dec 13 01:42:16 np0005558317 cloud-init[878]: |             .   |
Dec 13 01:42:16 np0005558317 cloud-init[878]: |              +  |
Dec 13 01:42:16 np0005558317 cloud-init[878]: |           E . +.|
Dec 13 01:42:16 np0005558317 cloud-init[878]: |            .   =|
Dec 13 01:42:16 np0005558317 cloud-init[878]: +----[SHA256]-----+
Dec 13 01:42:16 np0005558317 systemd[1]: Finished Cloud-init: Network Stage.
Dec 13 01:42:16 np0005558317 systemd[1]: Reached target Cloud-config availability.
Dec 13 01:42:16 np0005558317 systemd[1]: Reached target Network is Online.
Dec 13 01:42:16 np0005558317 systemd[1]: Starting Cloud-init: Config Stage...
Dec 13 01:42:16 np0005558317 systemd[1]: Starting Crash recovery kernel arming...
Dec 13 01:42:16 np0005558317 systemd[1]: Starting Notify NFS peers of a restart...
Dec 13 01:42:16 np0005558317 systemd[1]: Starting System Logging Service...
Dec 13 01:42:16 np0005558317 systemd[1]: Starting OpenSSH server daemon...
Dec 13 01:42:16 np0005558317 systemd[1]: Starting Permit User Sessions...
Dec 13 01:42:16 np0005558317 sm-notify[961]: Version 2.5.4 starting
Dec 13 01:42:16 np0005558317 systemd[1]: Started OpenSSH server daemon.
Dec 13 01:42:16 np0005558317 systemd[1]: Started Notify NFS peers of a restart.
Dec 13 01:42:16 np0005558317 systemd[1]: Finished Permit User Sessions.
Dec 13 01:42:16 np0005558317 rsyslogd[962]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="962" x-info="https://www.rsyslog.com"] start
Dec 13 01:42:16 np0005558317 rsyslogd[962]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec 13 01:42:16 np0005558317 systemd[1]: Started Command Scheduler.
Dec 13 01:42:16 np0005558317 systemd[1]: Started Getty on tty1.
Dec 13 01:42:16 np0005558317 systemd[1]: Started Serial Getty on ttyS0.
Dec 13 01:42:16 np0005558317 systemd[1]: Reached target Login Prompts.
Dec 13 01:42:16 np0005558317 systemd[1]: Started System Logging Service.
Dec 13 01:42:16 np0005558317 systemd[1]: Reached target Multi-User System.
Dec 13 01:42:16 np0005558317 systemd[1]: Starting Record Runlevel Change in UTMP...
Dec 13 01:42:17 np0005558317 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 01:42:17 np0005558317 systemd[1]: Finished Record Runlevel Change in UTMP.
Dec 13 01:42:17 np0005558317 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 01:42:17 np0005558317 kdumpctl[977]: kdump: No kdump initial ramdisk found.
Dec 13 01:42:17 np0005558317 kdumpctl[977]: kdump: Rebuilding /boot/initramfs-5.14.0-648.el9.x86_64kdump.img
Dec 13 01:42:17 np0005558317 cloud-init[1112]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 13 Dec 2025 06:42:17 +0000. Up 10.77 seconds.
Dec 13 01:42:17 np0005558317 systemd[1]: Finished Cloud-init: Config Stage.
Dec 13 01:42:17 np0005558317 systemd[1]: Starting Cloud-init: Final Stage...
Dec 13 01:42:17 np0005558317 dracut[1240]: dracut-057-102.git20250818.el9
Dec 13 01:42:17 np0005558317 cloud-init[1258]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 13 Dec 2025 06:42:17 +0000. Up 11.10 seconds.
Dec 13 01:42:17 np0005558317 dracut[1242]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-648.el9.x86_64kdump.img 5.14.0-648.el9.x86_64
Dec 13 01:42:17 np0005558317 cloud-init[1288]: #############################################################
Dec 13 01:42:17 np0005558317 cloud-init[1291]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec 13 01:42:17 np0005558317 cloud-init[1300]: 256 SHA256:ZhJNiM3whCkPRB3WQX/q5XpxOzDsefXdlDO7VjNeKk0 root@np0005558317 (ECDSA)
Dec 13 01:42:17 np0005558317 cloud-init[1308]: 256 SHA256:CGZApeQZ/l1aGDFrHyDPsKEwVnJrfSUR1oPeef/8Nyg root@np0005558317 (ED25519)
Dec 13 01:42:17 np0005558317 cloud-init[1312]: 3072 SHA256:GSQgTqH8TLCYUmB1S+je0NSUnMG2hLMJ9cqychnvHnQ root@np0005558317 (RSA)
Dec 13 01:42:17 np0005558317 cloud-init[1316]: -----END SSH HOST KEY FINGERPRINTS-----
Dec 13 01:42:17 np0005558317 cloud-init[1318]: #############################################################
Dec 13 01:42:17 np0005558317 cloud-init[1258]: Cloud-init v. 24.4-7.el9 finished at Sat, 13 Dec 2025 06:42:17 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.23 seconds
Dec 13 01:42:17 np0005558317 systemd[1]: Finished Cloud-init: Final Stage.
Dec 13 01:42:17 np0005558317 systemd[1]: Reached target Cloud-init target.
Dec 13 01:42:17 np0005558317 dracut[1242]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec 13 01:42:17 np0005558317 dracut[1242]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec 13 01:42:17 np0005558317 dracut[1242]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec 13 01:42:17 np0005558317 dracut[1242]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 13 01:42:17 np0005558317 dracut[1242]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 13 01:42:17 np0005558317 dracut[1242]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 13 01:42:17 np0005558317 dracut[1242]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 13 01:42:17 np0005558317 dracut[1242]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 13 01:42:17 np0005558317 dracut[1242]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 13 01:42:17 np0005558317 dracut[1242]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 13 01:42:17 np0005558317 dracut[1242]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 13 01:42:17 np0005558317 dracut[1242]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 13 01:42:17 np0005558317 dracut[1242]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 13 01:42:17 np0005558317 dracut[1242]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 13 01:42:17 np0005558317 dracut[1242]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 13 01:42:17 np0005558317 dracut[1242]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 13 01:42:17 np0005558317 dracut[1242]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 13 01:42:17 np0005558317 dracut[1242]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 13 01:42:17 np0005558317 dracut[1242]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 13 01:42:18 np0005558317 chronyd[754]: Selected source 162.159.200.123 (2.centos.pool.ntp.org)
Dec 13 01:42:18 np0005558317 chronyd[754]: System clock TAI offset set to 37 seconds
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: memstrack is not available
Dec 13 01:42:18 np0005558317 dracut[1242]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 13 01:42:18 np0005558317 dracut[1242]: memstrack is not available
Dec 13 01:42:18 np0005558317 dracut[1242]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 13 01:42:18 np0005558317 dracut[1242]: *** Including module: systemd ***
Dec 13 01:42:18 np0005558317 dracut[1242]: *** Including module: fips ***
Dec 13 01:42:18 np0005558317 dracut[1242]: *** Including module: systemd-initrd ***
Dec 13 01:42:18 np0005558317 dracut[1242]: *** Including module: i18n ***
Dec 13 01:42:19 np0005558317 dracut[1242]: *** Including module: drm ***
Dec 13 01:42:19 np0005558317 dracut[1242]: *** Including module: prefixdevname ***
Dec 13 01:42:19 np0005558317 dracut[1242]: *** Including module: kernel-modules ***
Dec 13 01:42:19 np0005558317 kernel: block vda: the capability attribute has been deprecated.
Dec 13 01:42:19 np0005558317 dracut[1242]: *** Including module: kernel-modules-extra ***
Dec 13 01:42:19 np0005558317 dracut[1242]: *** Including module: qemu ***
Dec 13 01:42:19 np0005558317 dracut[1242]: *** Including module: fstab-sys ***
Dec 13 01:42:19 np0005558317 dracut[1242]: *** Including module: rootfs-block ***
Dec 13 01:42:19 np0005558317 dracut[1242]: *** Including module: terminfo ***
Dec 13 01:42:19 np0005558317 dracut[1242]: *** Including module: udev-rules ***
Dec 13 01:42:20 np0005558317 dracut[1242]: Skipping udev rule: 91-permissions.rules
Dec 13 01:42:20 np0005558317 dracut[1242]: Skipping udev rule: 80-drivers-modprobe.rules
Dec 13 01:42:20 np0005558317 dracut[1242]: *** Including module: virtiofs ***
Dec 13 01:42:20 np0005558317 dracut[1242]: *** Including module: dracut-systemd ***
Dec 13 01:42:20 np0005558317 dracut[1242]: *** Including module: usrmount ***
Dec 13 01:42:20 np0005558317 dracut[1242]: *** Including module: base ***
Dec 13 01:42:20 np0005558317 dracut[1242]: *** Including module: fs-lib ***
Dec 13 01:42:20 np0005558317 dracut[1242]: *** Including module: kdumpbase ***
Dec 13 01:42:20 np0005558317 dracut[1242]: *** Including module: microcode_ctl-fw_dir_override ***
Dec 13 01:42:20 np0005558317 dracut[1242]:  microcode_ctl module: mangling fw_dir
Dec 13 01:42:20 np0005558317 dracut[1242]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec 13 01:42:20 np0005558317 dracut[1242]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec 13 01:42:20 np0005558317 dracut[1242]:    microcode_ctl: configuration "intel" is ignored
Dec 13 01:42:20 np0005558317 dracut[1242]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec 13 01:42:20 np0005558317 dracut[1242]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec 13 01:42:20 np0005558317 dracut[1242]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec 13 01:42:20 np0005558317 dracut[1242]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec 13 01:42:20 np0005558317 dracut[1242]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec 13 01:42:20 np0005558317 dracut[1242]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec 13 01:42:20 np0005558317 dracut[1242]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec 13 01:42:20 np0005558317 dracut[1242]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Dec 13 01:42:20 np0005558317 dracut[1242]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec 13 01:42:20 np0005558317 dracut[1242]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec 13 01:42:20 np0005558317 dracut[1242]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec 13 01:42:20 np0005558317 dracut[1242]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec 13 01:42:20 np0005558317 dracut[1242]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec 13 01:42:21 np0005558317 dracut[1242]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec 13 01:42:21 np0005558317 dracut[1242]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec 13 01:42:21 np0005558317 dracut[1242]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec 13 01:42:21 np0005558317 dracut[1242]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec 13 01:42:21 np0005558317 dracut[1242]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec 13 01:42:21 np0005558317 dracut[1242]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec 13 01:42:21 np0005558317 dracut[1242]: *** Including module: openssl ***
Dec 13 01:42:21 np0005558317 dracut[1242]: *** Including module: shutdown ***
Dec 13 01:42:21 np0005558317 dracut[1242]: *** Including module: squash ***
Dec 13 01:42:21 np0005558317 dracut[1242]: *** Including modules done ***
Dec 13 01:42:21 np0005558317 dracut[1242]: *** Installing kernel module dependencies ***
Dec 13 01:42:21 np0005558317 irqbalance[740]: Cannot change IRQ 45 affinity: Operation not permitted
Dec 13 01:42:21 np0005558317 irqbalance[740]: IRQ 45 affinity is now unmanaged
Dec 13 01:42:21 np0005558317 irqbalance[740]: Cannot change IRQ 48 affinity: Operation not permitted
Dec 13 01:42:21 np0005558317 irqbalance[740]: IRQ 48 affinity is now unmanaged
Dec 13 01:42:21 np0005558317 irqbalance[740]: Cannot change IRQ 46 affinity: Operation not permitted
Dec 13 01:42:21 np0005558317 irqbalance[740]: IRQ 46 affinity is now unmanaged
Dec 13 01:42:21 np0005558317 dracut[1242]: *** Installing kernel module dependencies done ***
Dec 13 01:42:21 np0005558317 dracut[1242]: *** Resolving executable dependencies ***
Dec 13 01:42:22 np0005558317 dracut[1242]: *** Resolving executable dependencies done ***
Dec 13 01:42:22 np0005558317 dracut[1242]: *** Generating early-microcode cpio image ***
Dec 13 01:42:22 np0005558317 dracut[1242]: *** Store current command line parameters ***
Dec 13 01:42:22 np0005558317 dracut[1242]: Stored kernel commandline:
Dec 13 01:42:22 np0005558317 dracut[1242]: No dracut internal kernel commandline stored in the initramfs
Dec 13 01:42:22 np0005558317 dracut[1242]: *** Install squash loader ***
Dec 13 01:42:23 np0005558317 dracut[1242]: *** Squashing the files inside the initramfs ***
Dec 13 01:42:25 np0005558317 dracut[1242]: *** Squashing the files inside the initramfs done ***
Dec 13 01:42:25 np0005558317 dracut[1242]: *** Creating image file '/boot/initramfs-5.14.0-648.el9.x86_64kdump.img' ***
Dec 13 01:42:25 np0005558317 dracut[1242]: *** Hardlinking files ***
Dec 13 01:42:25 np0005558317 dracut[1242]: *** Hardlinking files done ***
Dec 13 01:42:25 np0005558317 dracut[1242]: *** Creating initramfs image file '/boot/initramfs-5.14.0-648.el9.x86_64kdump.img' done ***
Dec 13 01:42:25 np0005558317 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 13 01:42:25 np0005558317 kdumpctl[977]: kdump: kexec: loaded kdump kernel
Dec 13 01:42:25 np0005558317 kdumpctl[977]: kdump: Starting kdump: [OK]
Dec 13 01:42:25 np0005558317 systemd[1]: Finished Crash recovery kernel arming.
Dec 13 01:42:25 np0005558317 systemd[1]: Startup finished in 1.353s (kernel) + 2.019s (initrd) + 15.962s (userspace) = 19.335s.
Dec 13 01:42:41 np0005558317 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 01:42:43 np0005558317 systemd-logind[745]: New session 1 of user zuul.
Dec 13 01:42:43 np0005558317 systemd[1]: Created slice User Slice of UID 1000.
Dec 13 01:42:43 np0005558317 systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec 13 01:42:43 np0005558317 systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec 13 01:42:43 np0005558317 systemd[1]: Starting User Manager for UID 1000...
Dec 13 01:42:43 np0005558317 systemd[4373]: Queued start job for default target Main User Target.
Dec 13 01:42:43 np0005558317 systemd[4373]: Created slice User Application Slice.
Dec 13 01:42:43 np0005558317 systemd[4373]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 13 01:42:43 np0005558317 systemd[4373]: Started Daily Cleanup of User's Temporary Directories.
Dec 13 01:42:43 np0005558317 systemd[4373]: Reached target Paths.
Dec 13 01:42:43 np0005558317 systemd[4373]: Reached target Timers.
Dec 13 01:42:43 np0005558317 systemd[4373]: Starting D-Bus User Message Bus Socket...
Dec 13 01:42:43 np0005558317 systemd[4373]: Starting Create User's Volatile Files and Directories...
Dec 13 01:42:43 np0005558317 systemd[4373]: Listening on D-Bus User Message Bus Socket.
Dec 13 01:42:43 np0005558317 systemd[4373]: Reached target Sockets.
Dec 13 01:42:43 np0005558317 systemd[4373]: Finished Create User's Volatile Files and Directories.
Dec 13 01:42:43 np0005558317 systemd[4373]: Reached target Basic System.
Dec 13 01:42:43 np0005558317 systemd[4373]: Reached target Main User Target.
Dec 13 01:42:43 np0005558317 systemd[4373]: Startup finished in 88ms.
Dec 13 01:42:43 np0005558317 systemd[1]: Started User Manager for UID 1000.
Dec 13 01:42:43 np0005558317 systemd[1]: Started Session 1 of User zuul.
Dec 13 01:42:44 np0005558317 python3[4455]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 01:42:46 np0005558317 python3[4483]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 01:42:50 np0005558317 python3[4537]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 01:42:51 np0005558317 python3[4577]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec 13 01:42:53 np0005558317 python3[4603]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDcfbFj32J6mpPMis8/nxcdxedFPWsAb48sQnk8/dqmA/0o7eojJNPvtwlioIjWQr/DJB6HjDPSB3NLuJPBSZnrNXlU85vHSy9U6+5lLAz6HZ28xMECtQqQv/iD4tkL7SwrX2dXIu5oOW+FtK2qFeV1Qkujl03B8H3B2uRtWqYL8/zyGhuKhpFnInzOT5JMJr/i5U3Q4mfai5xLM9Fx3245zOHWxY295NK9jkUWvOMnb9O6dcaPGBLsCrJVWkSIWQpHzO5mE+f3YYj4lohS2jaem9HJVWEs+lF7F+b1Eqcid6hw3yrM5FfemVQsE1x5kXbDueDke70soZK8MZDhM8hiX/3OY0csL75CZUeA0+Prard1EJKM0jZjvGkLPtA4/nsPY6CWE69HYvq4xsy8d0tGTHgIu//S8U/e0kkJZrqBCly1yR7a2GJdBckdXwHXdHr8vWYn3GkMhs5exnehoz4V/SMbrIaTHn4dTNqxxeoF7rmzY8Or/Sgprsq8anQjurs= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:42:53 np0005558317 python3[4627]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:42:53 np0005558317 python3[4726]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:42:53 np0005558317 python3[4797]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765608173.5330253-207-56333510274703/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=6d9437e7328943549472e8c958968345_id_rsa follow=False checksum=5f8b4b192d062490ef0af4c95496c5c1c0b5b3d0 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:42:54 np0005558317 python3[4920]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:42:54 np0005558317 python3[4991]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765608174.1583068-240-108821366010240/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=6d9437e7328943549472e8c958968345_id_rsa.pub follow=False checksum=8776d64955eefa7798ebe25237f59e8043a353da backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:42:55 np0005558317 python3[5039]: ansible-ping Invoked with data=pong
Dec 13 01:42:56 np0005558317 python3[5063]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 01:42:57 np0005558317 python3[5117]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec 13 01:42:58 np0005558317 python3[5149]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:42:58 np0005558317 python3[5173]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:42:59 np0005558317 python3[5197]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:42:59 np0005558317 python3[5221]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:42:59 np0005558317 python3[5245]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:42:59 np0005558317 python3[5269]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:43:01 np0005558317 python3[5295]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:43:01 np0005558317 irqbalance[740]: Cannot change IRQ 47 affinity: Operation not permitted
Dec 13 01:43:01 np0005558317 irqbalance[740]: IRQ 47 affinity is now unmanaged
Dec 13 01:43:01 np0005558317 python3[5373]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:43:01 np0005558317 python3[5446]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765608181.30276-21-226408615847484/source follow=False _original_basename=mirror_info.sh.j2 checksum=8d04605e615eb785450b583fc5efd2437794600d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:43:02 np0005558317 python3[5494]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:43:02 np0005558317 python3[5518]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:43:02 np0005558317 python3[5542]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:43:03 np0005558317 python3[5566]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:43:03 np0005558317 python3[5590]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:43:03 np0005558317 python3[5614]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:43:03 np0005558317 python3[5638]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:43:03 np0005558317 python3[5662]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:43:04 np0005558317 python3[5686]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:43:04 np0005558317 python3[5710]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:43:04 np0005558317 python3[5734]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:43:04 np0005558317 python3[5758]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:43:04 np0005558317 python3[5782]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:43:05 np0005558317 python3[5806]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:43:05 np0005558317 python3[5830]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:43:05 np0005558317 python3[5854]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:43:05 np0005558317 python3[5878]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:43:06 np0005558317 python3[5902]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:43:06 np0005558317 python3[5926]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:43:06 np0005558317 python3[5950]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:43:06 np0005558317 python3[5974]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:43:06 np0005558317 python3[5998]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:43:07 np0005558317 python3[6022]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:43:07 np0005558317 python3[6046]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:43:07 np0005558317 python3[6070]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:43:07 np0005558317 python3[6094]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:43:10 np0005558317 python3[6120]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 13 01:43:10 np0005558317 systemd[1]: Starting Time & Date Service...
Dec 13 01:43:10 np0005558317 systemd[1]: Started Time & Date Service.
Dec 13 01:43:10 np0005558317 systemd-timedated[6122]: Changed time zone to 'UTC' (UTC).
Dec 13 01:43:10 np0005558317 python3[6151]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:43:10 np0005558317 python3[6227]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:43:11 np0005558317 python3[6298]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1765608190.6503088-153-113752083553829/source _original_basename=tmp9jg7fw28 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:43:11 np0005558317 python3[6398]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:43:11 np0005558317 python3[6469]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765608191.2642102-183-59103226582882/source _original_basename=tmpt0s8wv1w follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:43:12 np0005558317 python3[6571]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:43:12 np0005558317 python3[6644]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765608192.0678525-231-72806642869882/source _original_basename=tmpkp90t9ad follow=False checksum=b24b3e02803cc66aa95d87527d84945a4821d184 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:43:12 np0005558317 python3[6692]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 01:43:13 np0005558317 python3[6718]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 01:43:13 np0005558317 python3[6798]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:43:13 np0005558317 python3[6871]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1765608193.2384923-273-263645010279144/source _original_basename=tmp2ycxx3ex follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:43:14 np0005558317 python3[6922]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e6f-3cad-0473-33bc-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 01:43:14 np0005558317 python3[6950]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e6f-3cad-0473-33bc-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec 13 01:43:15 np0005558317 python3[6978]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:43:39 np0005558317 python3[7004]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:43:40 np0005558317 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 13 01:44:02 np0005558317 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Dec 13 01:44:02 np0005558317 kernel: pci 0000:07:00.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec 13 01:44:02 np0005558317 kernel: pci 0000:07:00.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec 13 01:44:02 np0005558317 kernel: pci 0000:07:00.0: ROM [mem 0x00000000-0x0003ffff pref]
Dec 13 01:44:02 np0005558317 kernel: pci 0000:07:00.0: ROM [mem 0xfe000000-0xfe03ffff pref]: assigned
Dec 13 01:44:02 np0005558317 kernel: pci 0000:07:00.0: BAR 4 [mem 0xfb600000-0xfb603fff 64bit pref]: assigned
Dec 13 01:44:02 np0005558317 kernel: pci 0000:07:00.0: BAR 1 [mem 0xfe040000-0xfe040fff]: assigned
Dec 13 01:44:02 np0005558317 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Dec 13 01:44:03 np0005558317 NetworkManager[812]: <info>  [1765608243.0012] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 13 01:44:03 np0005558317 systemd-udevd[7007]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:44:03 np0005558317 NetworkManager[812]: <info>  [1765608243.0291] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 01:44:03 np0005558317 NetworkManager[812]: <info>  [1765608243.0318] settings: (eth1): created default wired connection 'Wired connection 1'
Dec 13 01:44:03 np0005558317 NetworkManager[812]: <info>  [1765608243.0323] device (eth1): carrier: link connected
Dec 13 01:44:03 np0005558317 NetworkManager[812]: <info>  [1765608243.0326] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 13 01:44:03 np0005558317 NetworkManager[812]: <info>  [1765608243.0331] policy: auto-activating connection 'Wired connection 1' (4a989926-6152-3dd8-8a07-a3472614de2b)
Dec 13 01:44:03 np0005558317 NetworkManager[812]: <info>  [1765608243.0335] device (eth1): Activation: starting connection 'Wired connection 1' (4a989926-6152-3dd8-8a07-a3472614de2b)
Dec 13 01:44:03 np0005558317 NetworkManager[812]: <info>  [1765608243.0337] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 01:44:03 np0005558317 NetworkManager[812]: <info>  [1765608243.0339] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 01:44:03 np0005558317 NetworkManager[812]: <info>  [1765608243.0344] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 01:44:03 np0005558317 NetworkManager[812]: <info>  [1765608243.0349] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 13 01:44:03 np0005558317 python3[7034]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e6f-3cad-395d-e36d-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 01:44:13 np0005558317 python3[7114]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:44:13 np0005558317 python3[7187]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765608253.0308301-111-65194899838960/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=7f90c6f6ba2ee4a63d653631ac68d02d4cb966d3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:44:14 np0005558317 python3[7237]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 01:44:14 np0005558317 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 13 01:44:14 np0005558317 systemd[1]: Stopped Network Manager Wait Online.
Dec 13 01:44:14 np0005558317 systemd[1]: Stopping Network Manager Wait Online...
Dec 13 01:44:14 np0005558317 NetworkManager[812]: <info>  [1765608254.2068] caught SIGTERM, shutting down normally.
Dec 13 01:44:14 np0005558317 systemd[1]: Stopping Network Manager...
Dec 13 01:44:14 np0005558317 NetworkManager[812]: <info>  [1765608254.2075] dhcp4 (eth0): canceled DHCP transaction
Dec 13 01:44:14 np0005558317 NetworkManager[812]: <info>  [1765608254.2075] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 01:44:14 np0005558317 NetworkManager[812]: <info>  [1765608254.2075] dhcp4 (eth0): state changed no lease
Dec 13 01:44:14 np0005558317 NetworkManager[812]: <info>  [1765608254.2076] dhcp6 (eth0): canceled DHCP transaction
Dec 13 01:44:14 np0005558317 NetworkManager[812]: <info>  [1765608254.2076] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 01:44:14 np0005558317 NetworkManager[812]: <info>  [1765608254.2076] dhcp6 (eth0): state changed no lease
Dec 13 01:44:14 np0005558317 NetworkManager[812]: <info>  [1765608254.2078] manager: NetworkManager state is now CONNECTING
Dec 13 01:44:14 np0005558317 NetworkManager[812]: <info>  [1765608254.2216] dhcp4 (eth1): canceled DHCP transaction
Dec 13 01:44:14 np0005558317 NetworkManager[812]: <info>  [1765608254.2216] dhcp4 (eth1): state changed no lease
Dec 13 01:44:14 np0005558317 NetworkManager[812]: <info>  [1765608254.2236] exiting (success)
Dec 13 01:44:14 np0005558317 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 13 01:44:14 np0005558317 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 13 01:44:14 np0005558317 systemd[1]: Stopped Network Manager.
Dec 13 01:44:14 np0005558317 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 13 01:44:14 np0005558317 systemd[1]: Starting Network Manager...
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.2586] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:7e7986d9-0598-4067-a630-6e2fad28fcbc)
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.2588] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.2634] manager[0x555bfff26000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 13 01:44:14 np0005558317 systemd[1]: Starting Hostname Service...
Dec 13 01:44:14 np0005558317 systemd[1]: Started Hostname Service.
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3221] hostname: hostname: using hostnamed
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3222] hostname: static hostname changed from (none) to "np0005558317"
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3226] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3231] manager[0x555bfff26000]: rfkill: Wi-Fi hardware radio set enabled
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3232] manager[0x555bfff26000]: rfkill: WWAN hardware radio set enabled
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3257] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3258] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3259] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3260] manager: Networking is enabled by state file
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3262] settings: Loaded settings plugin: keyfile (internal)
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3266] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3284] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3294] dhcp: init: Using DHCP client 'internal'
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3297] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3302] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3307] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3314] device (lo): Activation: starting connection 'lo' (08c9145e-912a-4b86-86a1-5730fa82ae86)
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3321] device (eth0): carrier: link connected
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3325] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3330] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3331] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3336] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3341] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3347] device (eth1): carrier: link connected
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3351] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3354] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (4a989926-6152-3dd8-8a07-a3472614de2b) (indicated)
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3355] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3360] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3367] device (eth1): Activation: starting connection 'Wired connection 1' (4a989926-6152-3dd8-8a07-a3472614de2b)
Dec 13 01:44:14 np0005558317 systemd[1]: Started Network Manager.
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3373] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3377] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3378] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3379] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3381] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3383] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3385] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3388] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3390] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3397] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3400] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3403] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3408] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3413] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3419] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3428] dhcp4 (eth0): state changed new lease, address=192.168.25.195
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3437] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 13 01:44:14 np0005558317 systemd[1]: Starting Network Manager Wait Online...
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3463] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3467] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 13 01:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3470] device (lo): Activation: successful, device activated.
Dec 13 01:44:14 np0005558317 python3[7309]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e6f-3cad-395d-e36d-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 01:44:15 np0005558317 NetworkManager[7245]: <info>  [1765608255.3893] dhcp6 (eth0): state changed new lease, address=2001:db8::1cf
Dec 13 01:44:15 np0005558317 NetworkManager[7245]: <info>  [1765608255.3903] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 13 01:44:15 np0005558317 NetworkManager[7245]: <info>  [1765608255.3935] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 13 01:44:15 np0005558317 NetworkManager[7245]: <info>  [1765608255.3936] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 13 01:44:15 np0005558317 NetworkManager[7245]: <info>  [1765608255.3939] manager: NetworkManager state is now CONNECTED_SITE
Dec 13 01:44:15 np0005558317 NetworkManager[7245]: <info>  [1765608255.3940] device (eth0): Activation: successful, device activated.
Dec 13 01:44:15 np0005558317 NetworkManager[7245]: <info>  [1765608255.3944] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 13 01:44:25 np0005558317 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 13 01:44:44 np0005558317 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 01:44:54 np0005558317 python3[7410]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:44:54 np0005558317 python3[7483]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765608294.0741224-273-27766293396627/source _original_basename=tmpb2ss1gkr follow=False checksum=480db894146ef2cc1376d935191c022003cc0988 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4103] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 13 01:44:59 np0005558317 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 13 01:44:59 np0005558317 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 13 01:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4292] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 13 01:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4295] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 13 01:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4301] device (eth1): Activation: successful, device activated.
Dec 13 01:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4306] manager: startup complete
Dec 13 01:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4308] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec 13 01:44:59 np0005558317 NetworkManager[7245]: <warn>  [1765608299.4315] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec 13 01:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4321] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec 13 01:44:59 np0005558317 systemd[1]: Finished Network Manager Wait Online.
Dec 13 01:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4425] dhcp4 (eth1): canceled DHCP transaction
Dec 13 01:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4425] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 13 01:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4425] dhcp4 (eth1): state changed no lease
Dec 13 01:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4436] policy: auto-activating connection 'ci-private-network' (45fd5f29-c067-5e65-8f11-dae4f04176a0)
Dec 13 01:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4439] device (eth1): Activation: starting connection 'ci-private-network' (45fd5f29-c067-5e65-8f11-dae4f04176a0)
Dec 13 01:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4440] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 01:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4442] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 01:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4449] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 01:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4456] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 01:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4483] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 01:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4485] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 01:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4488] device (eth1): Activation: successful, device activated.
Dec 13 01:45:09 np0005558317 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 13 01:45:37 np0005558317 systemd[4373]: Starting Mark boot as successful...
Dec 13 01:45:37 np0005558317 systemd[4373]: Finished Mark boot as successful.
Dec 13 01:45:54 np0005558317 systemd-logind[745]: Session 1 logged out. Waiting for processes to exit.
Dec 13 01:48:37 np0005558317 systemd[4373]: Created slice User Background Tasks Slice.
Dec 13 01:48:37 np0005558317 systemd[4373]: Starting Cleanup of User's Temporary Files and Directories...
Dec 13 01:48:37 np0005558317 systemd[4373]: Finished Cleanup of User's Temporary Files and Directories.
Dec 13 01:50:12 np0005558317 systemd-logind[745]: New session 3 of user zuul.
Dec 13 01:50:12 np0005558317 systemd[1]: Started Session 3 of User zuul.
Dec 13 01:50:12 np0005558317 python3[7564]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e6f-3cad-84d4-1fbf-000000001f5b-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 01:50:12 np0005558317 python3[7593]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:50:13 np0005558317 python3[7619]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:50:13 np0005558317 python3[7645]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:50:13 np0005558317 python3[7671]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:50:13 np0005558317 python3[7697]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:50:14 np0005558317 python3[7775]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:50:14 np0005558317 python3[7848]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765608614.028469-475-277155668861196/source _original_basename=tmp1f8w1sqh follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:50:15 np0005558317 python3[7898]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 01:50:15 np0005558317 systemd[1]: Reloading.
Dec 13 01:50:15 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 01:50:16 np0005558317 python3[7953]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec 13 01:50:16 np0005558317 python3[7979]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 01:50:16 np0005558317 python3[8007]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 01:50:17 np0005558317 python3[8035]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 01:50:17 np0005558317 python3[8063]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 01:50:17 np0005558317 python3[8090]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e6f-3cad-84d4-1fbf-000000001f62-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 01:50:18 np0005558317 python3[8120]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 01:50:20 np0005558317 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 01:50:20 np0005558317 systemd[1]: session-3.scope: Consumed 3.129s CPU time.
Dec 13 01:50:20 np0005558317 systemd-logind[745]: Session 3 logged out. Waiting for processes to exit.
Dec 13 01:50:20 np0005558317 systemd-logind[745]: Removed session 3.
Dec 13 01:50:22 np0005558317 systemd-logind[745]: New session 4 of user zuul.
Dec 13 01:50:22 np0005558317 systemd[1]: Started Session 4 of User zuul.
Dec 13 01:50:22 np0005558317 python3[8154]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 13 01:50:36 np0005558317 kernel: SELinux:  Converting 386 SID table entries...
Dec 13 01:50:36 np0005558317 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 01:50:36 np0005558317 kernel: SELinux:  policy capability open_perms=1
Dec 13 01:50:36 np0005558317 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 01:50:36 np0005558317 kernel: SELinux:  policy capability always_check_network=0
Dec 13 01:50:36 np0005558317 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 01:50:36 np0005558317 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 01:50:36 np0005558317 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 01:50:43 np0005558317 kernel: SELinux:  Converting 386 SID table entries...
Dec 13 01:50:43 np0005558317 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 01:50:43 np0005558317 kernel: SELinux:  policy capability open_perms=1
Dec 13 01:50:43 np0005558317 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 01:50:43 np0005558317 kernel: SELinux:  policy capability always_check_network=0
Dec 13 01:50:43 np0005558317 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 01:50:43 np0005558317 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 01:50:43 np0005558317 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 01:50:50 np0005558317 kernel: SELinux:  Converting 386 SID table entries...
Dec 13 01:50:50 np0005558317 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 01:50:50 np0005558317 kernel: SELinux:  policy capability open_perms=1
Dec 13 01:50:50 np0005558317 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 01:50:50 np0005558317 kernel: SELinux:  policy capability always_check_network=0
Dec 13 01:50:50 np0005558317 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 01:50:50 np0005558317 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 01:50:50 np0005558317 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 01:50:51 np0005558317 setsebool[8222]: The virt_use_nfs policy boolean was changed to 1 by root
Dec 13 01:50:51 np0005558317 setsebool[8222]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec 13 01:51:00 np0005558317 kernel: SELinux:  Converting 389 SID table entries...
Dec 13 01:51:00 np0005558317 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 01:51:00 np0005558317 kernel: SELinux:  policy capability open_perms=1
Dec 13 01:51:00 np0005558317 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 01:51:00 np0005558317 kernel: SELinux:  policy capability always_check_network=0
Dec 13 01:51:00 np0005558317 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 01:51:00 np0005558317 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 01:51:00 np0005558317 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 01:51:12 np0005558317 dbus-broker-launch[734]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 13 01:51:12 np0005558317 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 01:51:12 np0005558317 systemd[1]: Starting man-db-cache-update.service...
Dec 13 01:51:12 np0005558317 systemd[1]: Reloading.
Dec 13 01:51:12 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 01:51:12 np0005558317 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 01:51:15 np0005558317 python3[13544]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e6f-3cad-55e7-0116-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 01:51:16 np0005558317 kernel: evm: overlay not supported
Dec 13 01:51:16 np0005558317 systemd[4373]: Starting D-Bus User Message Bus...
Dec 13 01:51:16 np0005558317 dbus-broker-launch[14044]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 13 01:51:16 np0005558317 dbus-broker-launch[14044]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 13 01:51:16 np0005558317 systemd[4373]: Started D-Bus User Message Bus.
Dec 13 01:51:16 np0005558317 dbus-broker-lau[14044]: Ready
Dec 13 01:51:16 np0005558317 systemd[4373]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 13 01:51:16 np0005558317 systemd[4373]: Created slice Slice /user.
Dec 13 01:51:16 np0005558317 systemd[4373]: podman-14022.scope: unit configures an IP firewall, but not running as root.
Dec 13 01:51:16 np0005558317 systemd[4373]: (This warning is only shown for the first unit using IP firewalling.)
Dec 13 01:51:16 np0005558317 systemd[4373]: Started podman-14022.scope.
Dec 13 01:51:17 np0005558317 systemd[4373]: Started podman-pause-1c7804ac.scope.
Dec 13 01:51:17 np0005558317 python3[14945]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.129.56.153:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.129.56.153:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:51:17 np0005558317 python3[14945]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec 13 01:51:18 np0005558317 systemd-logind[745]: Session 4 logged out. Waiting for processes to exit.
Dec 13 01:51:18 np0005558317 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 01:51:18 np0005558317 systemd[1]: session-4.scope: Consumed 45.962s CPU time.
Dec 13 01:51:18 np0005558317 systemd-logind[745]: Removed session 4.
Dec 13 01:51:35 np0005558317 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 01:51:35 np0005558317 systemd[1]: Finished man-db-cache-update.service.
Dec 13 01:51:35 np0005558317 systemd[1]: man-db-cache-update.service: Consumed 27.872s CPU time.
Dec 13 01:51:35 np0005558317 systemd[1]: run-r0cd49d91c2404a5483104891b3245523.service: Deactivated successfully.
Dec 13 01:51:42 np0005558317 systemd-logind[745]: New session 5 of user zuul.
Dec 13 01:51:42 np0005558317 systemd[1]: Started Session 5 of User zuul.
Dec 13 01:51:42 np0005558317 python3[29669]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJnHqEnifdMogqe2koi3kf/8MOGYg7doIt/u/Zi+s0YfOOikjBYd243liAj5ighEVQqRN5syYt1lhjz20ZhMgK4= zuul@np0005558316#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:51:43 np0005558317 python3[29695]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJnHqEnifdMogqe2koi3kf/8MOGYg7doIt/u/Zi+s0YfOOikjBYd243liAj5ighEVQqRN5syYt1lhjz20ZhMgK4= zuul@np0005558316#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:51:44 np0005558317 python3[29721]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005558317 update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 13 01:51:44 np0005558317 python3[29755]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJnHqEnifdMogqe2koi3kf/8MOGYg7doIt/u/Zi+s0YfOOikjBYd243liAj5ighEVQqRN5syYt1lhjz20ZhMgK4= zuul@np0005558316#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:51:44 np0005558317 python3[29833]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:51:44 np0005558317 python3[29906]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765608704.4171355-137-252240559304452/source _original_basename=tmp_boh_12k follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:51:45 np0005558317 python3[29956]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec 13 01:51:45 np0005558317 systemd[1]: Starting Hostname Service...
Dec 13 01:51:45 np0005558317 systemd[1]: Started Hostname Service.
Dec 13 01:51:45 np0005558317 systemd-hostnamed[29960]: Changed pretty hostname to 'compute-0'
Dec 13 01:51:45 np0005558317 systemd-hostnamed[29960]: Hostname set to <compute-0> (static)
Dec 13 01:51:45 np0005558317 NetworkManager[7245]: <info>  [1765608705.6635] hostname: static hostname changed from "np0005558317" to "compute-0"
Dec 13 01:51:45 np0005558317 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 13 01:51:45 np0005558317 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 13 01:51:45 np0005558317 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 01:51:45 np0005558317 systemd[1]: session-5.scope: Consumed 1.641s CPU time.
Dec 13 01:51:45 np0005558317 systemd-logind[745]: Session 5 logged out. Waiting for processes to exit.
Dec 13 01:51:45 np0005558317 systemd-logind[745]: Removed session 5.
Dec 13 01:51:55 np0005558317 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 13 01:52:15 np0005558317 systemd[1]: Starting dnf makecache...
Dec 13 01:52:15 np0005558317 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 01:52:15 np0005558317 dnf[29973]: Failed determining last makecache time.
Dec 13 01:52:17 np0005558317 dnf[29973]: CentOS Stream 9 - BaseOS                        5.1 kB/s | 7.3 kB     00:01
Dec 13 01:52:19 np0005558317 dnf[29973]: CentOS Stream 9 - AppStream                     3.4 kB/s | 7.8 kB     00:02
Dec 13 01:52:20 np0005558317 dnf[29973]: CentOS Stream 9 - CRB                            17 kB/s | 7.2 kB     00:00
Dec 13 01:52:21 np0005558317 dnf[29973]: CentOS Stream 9 - Extras packages               8.3 kB/s | 8.3 kB     00:00
Dec 13 01:52:21 np0005558317 dnf[29973]: Metadata cache created.
Dec 13 01:52:21 np0005558317 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 13 01:52:21 np0005558317 systemd[1]: Finished dnf makecache.
Dec 13 01:55:34 np0005558317 systemd-logind[745]: New session 6 of user zuul.
Dec 13 01:55:34 np0005558317 systemd[1]: Started Session 6 of User zuul.
Dec 13 01:55:34 np0005558317 python3[30061]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 01:55:36 np0005558317 python3[30173]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:55:36 np0005558317 python3[30246]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765608935.8850853-33999-143407617000061/source mode=0755 _original_basename=delorean.repo follow=False checksum=619eee7d4b000c2fdbd89639e9af5cd9cd1e4284 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:55:36 np0005558317 python3[30272]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:55:36 np0005558317 python3[30345]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765608935.8850853-33999-143407617000061/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=32cab4d7d3069e03e1e375a1684f22cb2eb72603 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:55:37 np0005558317 python3[30371]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:55:37 np0005558317 python3[30444]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765608935.8850853-33999-143407617000061/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=5c739387d960f7119f9d22475c90dcd56f13e885 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:55:37 np0005558317 python3[30470]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:55:37 np0005558317 python3[30543]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765608935.8850853-33999-143407617000061/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=8c00581855ef07972e002c82cc33b7b03ecccc44 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:55:37 np0005558317 python3[30569]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:55:38 np0005558317 python3[30642]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765608935.8850853-33999-143407617000061/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=5515871802d2268513e691cf460c59c7da7132f9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:55:38 np0005558317 python3[30668]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:55:38 np0005558317 python3[30741]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765608935.8850853-33999-143407617000061/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=c87c0371a768c46886c8904021e8b85df789a625 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:55:38 np0005558317 python3[30767]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:55:39 np0005558317 python3[30840]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765608935.8850853-33999-143407617000061/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:55:51 np0005558317 python3[30898]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 01:57:37 np0005558317 systemd[1]: Starting Cleanup of Temporary Directories...
Dec 13 01:57:37 np0005558317 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec 13 01:57:37 np0005558317 systemd[1]: Finished Cleanup of Temporary Directories.
Dec 13 01:57:37 np0005558317 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec 13 02:00:50 np0005558317 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 02:00:50 np0005558317 systemd[1]: session-6.scope: Consumed 3.523s CPU time.
Dec 13 02:00:50 np0005558317 systemd-logind[745]: Session 6 logged out. Waiting for processes to exit.
Dec 13 02:00:50 np0005558317 systemd-logind[745]: Removed session 6.
Dec 13 02:05:42 np0005558317 systemd-logind[745]: New session 7 of user zuul.
Dec 13 02:05:42 np0005558317 systemd[1]: Started Session 7 of User zuul.
Dec 13 02:05:42 np0005558317 python3.9[31072]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:05:43 np0005558317 python3.9[31253]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:05:52 np0005558317 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 02:05:52 np0005558317 systemd[1]: session-7.scope: Consumed 6.281s CPU time.
Dec 13 02:05:52 np0005558317 systemd-logind[745]: Session 7 logged out. Waiting for processes to exit.
Dec 13 02:05:52 np0005558317 systemd-logind[745]: Removed session 7.
Dec 13 02:06:08 np0005558317 systemd-logind[745]: New session 8 of user zuul.
Dec 13 02:06:08 np0005558317 systemd[1]: Started Session 8 of User zuul.
Dec 13 02:06:08 np0005558317 python3.9[31463]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 13 02:06:09 np0005558317 python3.9[31637]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:06:10 np0005558317 python3.9[31790]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:06:11 np0005558317 python3.9[31943]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:06:11 np0005558317 python3.9[32095]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:06:12 np0005558317 python3.9[32247]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:06:12 np0005558317 python3.9[32370]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765609571.6877418-73-52532129282652/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:06:13 np0005558317 python3.9[32522]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:06:13 np0005558317 python3.9[32678]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:06:14 np0005558317 python3.9[32830]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:06:14 np0005558317 python3.9[32980]: ansible-ansible.builtin.service_facts Invoked
Dec 13 02:06:16 np0005558317 python3.9[33233]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:06:17 np0005558317 python3.9[33383]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:06:18 np0005558317 python3.9[33537]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:06:19 np0005558317 python3.9[33695]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:06:19 np0005558317 python3.9[33779]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:07:55 np0005558317 systemd[1]: Reloading.
Dec 13 02:07:55 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:07:56 np0005558317 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec 13 02:07:56 np0005558317 systemd[1]: Reloading.
Dec 13 02:07:56 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:07:56 np0005558317 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec 13 02:07:56 np0005558317 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec 13 02:07:56 np0005558317 systemd[1]: Reloading.
Dec 13 02:07:56 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:07:56 np0005558317 systemd[1]: Listening on LVM2 poll daemon socket.
Dec 13 02:07:56 np0005558317 dbus-broker-launch[727]: Noticed file-system modification, trigger reload.
Dec 13 02:07:56 np0005558317 dbus-broker-launch[727]: Noticed file-system modification, trigger reload.
Dec 13 02:07:56 np0005558317 dbus-broker-launch[727]: Noticed file-system modification, trigger reload.
Dec 13 02:08:41 np0005558317 kernel: SELinux:  Converting 2719 SID table entries...
Dec 13 02:08:41 np0005558317 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 02:08:41 np0005558317 kernel: SELinux:  policy capability open_perms=1
Dec 13 02:08:41 np0005558317 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 02:08:41 np0005558317 kernel: SELinux:  policy capability always_check_network=0
Dec 13 02:08:41 np0005558317 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 02:08:41 np0005558317 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 02:08:41 np0005558317 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 02:08:41 np0005558317 dbus-broker-launch[734]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec 13 02:08:41 np0005558317 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 02:08:41 np0005558317 systemd[1]: Starting man-db-cache-update.service...
Dec 13 02:08:41 np0005558317 systemd[1]: Reloading.
Dec 13 02:08:41 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:08:41 np0005558317 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 02:08:42 np0005558317 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 02:08:42 np0005558317 systemd[1]: Finished man-db-cache-update.service.
Dec 13 02:08:42 np0005558317 systemd[1]: run-r4e83c113347a4f54ab484307fa3d7a79.service: Deactivated successfully.
Dec 13 02:08:42 np0005558317 python3.9[35286]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:08:44 np0005558317 python3.9[35567]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 13 02:08:44 np0005558317 python3.9[35719]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 13 02:08:46 np0005558317 python3.9[35872]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:08:47 np0005558317 python3.9[36024]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 13 02:08:48 np0005558317 python3.9[36176]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:08:48 np0005558317 python3.9[36328]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:08:48 np0005558317 python3.9[36451]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765609728.1732094-236-97114366218405/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ac04d306f192c0875048c78c53711957498c3ede backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:08:49 np0005558317 python3.9[36603]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:08:50 np0005558317 python3.9[36755]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:08:50 np0005558317 python3.9[36908]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:08:53 np0005558317 python3.9[37060]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 13 02:08:53 np0005558317 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 02:08:53 np0005558317 python3.9[37214]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 13 02:08:54 np0005558317 python3.9[37372]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 13 02:08:54 np0005558317 python3.9[37532]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 13 02:08:55 np0005558317 python3.9[37685]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 13 02:08:55 np0005558317 python3.9[37843]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 13 02:08:56 np0005558317 python3.9[37995]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:08:58 np0005558317 python3.9[38148]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:08:58 np0005558317 python3.9[38300]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:08:59 np0005558317 python3.9[38423]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765609738.3757422-355-129980387542290/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:08:59 np0005558317 python3.9[38575]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:08:59 np0005558317 systemd[1]: Starting Load Kernel Modules...
Dec 13 02:09:00 np0005558317 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 02:09:00 np0005558317 kernel: Bridge firewalling registered
Dec 13 02:09:00 np0005558317 systemd-modules-load[38579]: Inserted module 'br_netfilter'
Dec 13 02:09:00 np0005558317 systemd[1]: Finished Load Kernel Modules.
Dec 13 02:09:00 np0005558317 python3.9[38735]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:09:00 np0005558317 python3.9[38858]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765609740.1776235-378-59125076682533/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:09:01 np0005558317 python3.9[39010]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:09:07 np0005558317 dbus-broker-launch[727]: Noticed file-system modification, trigger reload.
Dec 13 02:09:07 np0005558317 dbus-broker-launch[727]: Noticed file-system modification, trigger reload.
Dec 13 02:09:07 np0005558317 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 02:09:07 np0005558317 systemd[1]: Starting man-db-cache-update.service...
Dec 13 02:09:07 np0005558317 systemd[1]: Reloading.
Dec 13 02:09:08 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:09:08 np0005558317 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 02:09:09 np0005558317 python3.9[40256]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:09:09 np0005558317 python3.9[41371]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 13 02:09:10 np0005558317 python3.9[42169]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:09:10 np0005558317 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 02:09:10 np0005558317 systemd[1]: Finished man-db-cache-update.service.
Dec 13 02:09:10 np0005558317 systemd[1]: man-db-cache-update.service: Consumed 3.229s CPU time.
Dec 13 02:09:10 np0005558317 systemd[1]: run-r10fa14ae4b894295b2e4cb6242b36709.service: Deactivated successfully.
Dec 13 02:09:10 np0005558317 python3.9[43128]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:09:10 np0005558317 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 13 02:09:10 np0005558317 systemd[1]: Starting Authorization Manager...
Dec 13 02:09:11 np0005558317 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 13 02:09:11 np0005558317 polkitd[43388]: Started polkitd version 0.117
Dec 13 02:09:11 np0005558317 systemd[1]: Started Authorization Manager.
Dec 13 02:09:11 np0005558317 python3.9[43554]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:09:11 np0005558317 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 13 02:09:11 np0005558317 systemd[1]: tuned.service: Deactivated successfully.
Dec 13 02:09:11 np0005558317 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 13 02:09:11 np0005558317 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 13 02:09:12 np0005558317 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 13 02:09:12 np0005558317 python3.9[43715]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 13 02:09:14 np0005558317 python3.9[43867]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:09:14 np0005558317 systemd[1]: Reloading.
Dec 13 02:09:14 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:09:14 np0005558317 python3.9[44057]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:09:15 np0005558317 systemd[1]: Reloading.
Dec 13 02:09:15 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:09:15 np0005558317 python3.9[44246]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:09:16 np0005558317 python3.9[44399]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:09:16 np0005558317 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec 13 02:09:16 np0005558317 python3.9[44552]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:09:18 np0005558317 python3.9[44714]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:09:18 np0005558317 python3.9[44867]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:09:18 np0005558317 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 02:09:18 np0005558317 systemd[1]: Stopped Apply Kernel Variables.
Dec 13 02:09:18 np0005558317 systemd[1]: Stopping Apply Kernel Variables...
Dec 13 02:09:19 np0005558317 systemd[1]: Starting Apply Kernel Variables...
Dec 13 02:09:19 np0005558317 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 02:09:19 np0005558317 systemd[1]: Finished Apply Kernel Variables.
Dec 13 02:09:19 np0005558317 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 02:09:19 np0005558317 systemd[1]: session-8.scope: Consumed 1min 41.309s CPU time.
Dec 13 02:09:19 np0005558317 systemd-logind[745]: Session 8 logged out. Waiting for processes to exit.
Dec 13 02:09:19 np0005558317 systemd-logind[745]: Removed session 8.
Dec 13 02:09:24 np0005558317 systemd-logind[745]: New session 9 of user zuul.
Dec 13 02:09:24 np0005558317 systemd[1]: Started Session 9 of User zuul.
Dec 13 02:09:25 np0005558317 python3.9[45051]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:09:26 np0005558317 python3.9[45207]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 13 02:09:26 np0005558317 python3.9[45360]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 13 02:09:27 np0005558317 python3.9[45518]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 13 02:09:28 np0005558317 python3.9[45678]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:09:28 np0005558317 python3.9[45762]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 13 02:09:35 np0005558317 python3.9[45928]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:09:44 np0005558317 kernel: SELinux:  Converting 2731 SID table entries...
Dec 13 02:09:44 np0005558317 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 02:09:44 np0005558317 kernel: SELinux:  policy capability open_perms=1
Dec 13 02:09:44 np0005558317 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 02:09:44 np0005558317 kernel: SELinux:  policy capability always_check_network=0
Dec 13 02:09:44 np0005558317 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 02:09:44 np0005558317 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 02:09:44 np0005558317 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 02:09:44 np0005558317 dbus-broker-launch[734]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec 13 02:09:44 np0005558317 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec 13 02:09:45 np0005558317 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 02:09:45 np0005558317 systemd[1]: Starting man-db-cache-update.service...
Dec 13 02:09:45 np0005558317 systemd[1]: Reloading.
Dec 13 02:09:45 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:09:45 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:09:45 np0005558317 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 02:09:46 np0005558317 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 02:09:46 np0005558317 systemd[1]: Finished man-db-cache-update.service.
Dec 13 02:09:46 np0005558317 systemd[1]: run-r6c88459f782f4b73970d8fe064708acb.service: Deactivated successfully.
Dec 13 02:09:47 np0005558317 python3.9[47025]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 02:09:47 np0005558317 systemd[1]: Reloading.
Dec 13 02:09:47 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:09:47 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:09:47 np0005558317 systemd[1]: Starting Open vSwitch Database Unit...
Dec 13 02:09:47 np0005558317 chown[47067]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec 13 02:09:47 np0005558317 ovs-ctl[47072]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec 13 02:09:47 np0005558317 ovs-ctl[47072]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec 13 02:09:47 np0005558317 ovs-ctl[47072]: Starting ovsdb-server [  OK  ]
Dec 13 02:09:47 np0005558317 ovs-vsctl[47121]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec 13 02:09:47 np0005558317 ovs-vsctl[47141]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"075cc82e-193d-47f2-a248-9917472f5475\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec 13 02:09:47 np0005558317 ovs-ctl[47072]: Configuring Open vSwitch system IDs [  OK  ]
Dec 13 02:09:47 np0005558317 ovs-ctl[47072]: Enabling remote OVSDB managers [  OK  ]
Dec 13 02:09:47 np0005558317 systemd[1]: Started Open vSwitch Database Unit.
Dec 13 02:09:47 np0005558317 ovs-vsctl[47147]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 13 02:09:47 np0005558317 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec 13 02:09:47 np0005558317 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec 13 02:09:47 np0005558317 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec 13 02:09:47 np0005558317 kernel: openvswitch: Open vSwitch switching datapath
Dec 13 02:09:47 np0005558317 ovs-ctl[47191]: Inserting openvswitch module [  OK  ]
Dec 13 02:09:47 np0005558317 ovs-ctl[47160]: Starting ovs-vswitchd [  OK  ]
Dec 13 02:09:47 np0005558317 ovs-ctl[47160]: Enabling remote OVSDB managers [  OK  ]
Dec 13 02:09:47 np0005558317 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec 13 02:09:47 np0005558317 ovs-vsctl[47209]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 13 02:09:47 np0005558317 systemd[1]: Starting Open vSwitch...
Dec 13 02:09:47 np0005558317 systemd[1]: Finished Open vSwitch.
Dec 13 02:09:48 np0005558317 python3.9[47360]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:09:49 np0005558317 python3.9[47512]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 13 02:09:49 np0005558317 kernel: SELinux:  Converting 2745 SID table entries...
Dec 13 02:09:49 np0005558317 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 02:09:49 np0005558317 kernel: SELinux:  policy capability open_perms=1
Dec 13 02:09:49 np0005558317 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 02:09:49 np0005558317 kernel: SELinux:  policy capability always_check_network=0
Dec 13 02:09:49 np0005558317 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 02:09:49 np0005558317 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 02:09:49 np0005558317 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 02:09:50 np0005558317 python3.9[47667]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:09:51 np0005558317 dbus-broker-launch[734]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec 13 02:09:51 np0005558317 python3.9[47825]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:09:52 np0005558317 python3.9[47978]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:09:54 np0005558317 python3.9[48265]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 13 02:09:54 np0005558317 python3.9[48415]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:09:55 np0005558317 python3.9[48569]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:09:58 np0005558317 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 02:09:58 np0005558317 systemd[1]: Starting man-db-cache-update.service...
Dec 13 02:09:58 np0005558317 systemd[1]: Reloading.
Dec 13 02:09:58 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:09:58 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:09:58 np0005558317 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 02:09:59 np0005558317 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 02:09:59 np0005558317 systemd[1]: Finished man-db-cache-update.service.
Dec 13 02:09:59 np0005558317 systemd[1]: run-r4d5decbe51dd49cbb85d3151aaad4b2d.service: Deactivated successfully.
Dec 13 02:09:59 np0005558317 python3.9[48885]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:09:59 np0005558317 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 13 02:09:59 np0005558317 systemd[1]: Stopped Network Manager Wait Online.
Dec 13 02:09:59 np0005558317 systemd[1]: Stopping Network Manager Wait Online...
Dec 13 02:09:59 np0005558317 systemd[1]: Stopping Network Manager...
Dec 13 02:09:59 np0005558317 NetworkManager[7245]: <info>  [1765609799.7463] caught SIGTERM, shutting down normally.
Dec 13 02:09:59 np0005558317 NetworkManager[7245]: <info>  [1765609799.7476] dhcp4 (eth0): canceled DHCP transaction
Dec 13 02:09:59 np0005558317 NetworkManager[7245]: <info>  [1765609799.7476] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 02:09:59 np0005558317 NetworkManager[7245]: <info>  [1765609799.7476] dhcp4 (eth0): state changed no lease
Dec 13 02:09:59 np0005558317 NetworkManager[7245]: <info>  [1765609799.7478] dhcp6 (eth0): canceled DHCP transaction
Dec 13 02:09:59 np0005558317 NetworkManager[7245]: <info>  [1765609799.7478] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 02:09:59 np0005558317 NetworkManager[7245]: <info>  [1765609799.7478] dhcp6 (eth0): state changed no lease
Dec 13 02:09:59 np0005558317 NetworkManager[7245]: <info>  [1765609799.7480] manager: NetworkManager state is now CONNECTED_SITE
Dec 13 02:09:59 np0005558317 NetworkManager[7245]: <info>  [1765609799.7504] exiting (success)
Dec 13 02:09:59 np0005558317 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 13 02:09:59 np0005558317 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 13 02:09:59 np0005558317 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 13 02:09:59 np0005558317 systemd[1]: Stopped Network Manager.
Dec 13 02:09:59 np0005558317 systemd[1]: Starting Network Manager...
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8019] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:7e7986d9-0598-4067-a630-6e2fad28fcbc)
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8021] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8071] manager[0x557569987000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 13 02:09:59 np0005558317 systemd[1]: Starting Hostname Service...
Dec 13 02:09:59 np0005558317 systemd[1]: Started Hostname Service.
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8746] hostname: hostname: using hostnamed
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8746] hostname: static hostname changed from (none) to "compute-0"
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8749] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8754] manager[0x557569987000]: rfkill: Wi-Fi hardware radio set enabled
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8755] manager[0x557569987000]: rfkill: WWAN hardware radio set enabled
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8776] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-ovs.so)
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8786] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8786] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8787] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8787] manager: Networking is enabled by state file
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8789] settings: Loaded settings plugin: keyfile (internal)
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8792] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8814] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8824] dhcp: init: Using DHCP client 'internal'
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8826] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8830] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8833] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8839] device (lo): Activation: starting connection 'lo' (08c9145e-912a-4b86-86a1-5730fa82ae86)
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8844] device (eth0): carrier: link connected
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8848] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8850] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8851] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8855] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8858] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8862] device (eth1): carrier: link connected
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8866] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8868] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (45fd5f29-c067-5e65-8f11-dae4f04176a0) (indicated)
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8869] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8871] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8875] device (eth1): Activation: starting connection 'ci-private-network' (45fd5f29-c067-5e65-8f11-dae4f04176a0)
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8880] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 13 02:09:59 np0005558317 systemd[1]: Started Network Manager.
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8886] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8888] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8889] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8890] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8891] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8893] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8906] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8909] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8916] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8919] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8922] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8928] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8930] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8936] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8940] dhcp4 (eth0): state changed new lease, address=192.168.25.195
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8944] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.8999] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.9002] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.9007] device (lo): Activation: successful, device activated.
Dec 13 02:09:59 np0005558317 systemd[1]: Starting Network Manager Wait Online...
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.9088] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.9094] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.9116] manager: NetworkManager state is now CONNECTED_LOCAL
Dec 13 02:09:59 np0005558317 NetworkManager[48896]: <info>  [1765609799.9118] device (eth1): Activation: successful, device activated.
Dec 13 02:10:00 np0005558317 python3.9[49094]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:10:00 np0005558317 NetworkManager[48896]: <info>  [1765609800.9271] dhcp6 (eth0): state changed new lease, address=2001:db8::1cf
Dec 13 02:10:00 np0005558317 NetworkManager[48896]: <info>  [1765609800.9284] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 13 02:10:00 np0005558317 NetworkManager[48896]: <info>  [1765609800.9320] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 13 02:10:00 np0005558317 NetworkManager[48896]: <info>  [1765609800.9321] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 13 02:10:00 np0005558317 NetworkManager[48896]: <info>  [1765609800.9325] manager: NetworkManager state is now CONNECTED_SITE
Dec 13 02:10:00 np0005558317 NetworkManager[48896]: <info>  [1765609800.9327] device (eth0): Activation: successful, device activated.
Dec 13 02:10:00 np0005558317 NetworkManager[48896]: <info>  [1765609800.9331] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 13 02:10:00 np0005558317 NetworkManager[48896]: <info>  [1765609800.9334] manager: startup complete
Dec 13 02:10:00 np0005558317 systemd[1]: Finished Network Manager Wait Online.
Dec 13 02:10:07 np0005558317 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 02:10:07 np0005558317 systemd[1]: Starting man-db-cache-update.service...
Dec 13 02:10:07 np0005558317 systemd[1]: Reloading.
Dec 13 02:10:07 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:10:07 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:10:07 np0005558317 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 02:10:07 np0005558317 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 02:10:07 np0005558317 systemd[1]: Finished man-db-cache-update.service.
Dec 13 02:10:07 np0005558317 systemd[1]: run-rb2e60d43613742c98f546a7f916df477.service: Deactivated successfully.
Dec 13 02:10:08 np0005558317 python3.9[49572]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:10:09 np0005558317 python3.9[49724]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:10:09 np0005558317 python3.9[49878]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:10:10 np0005558317 python3.9[50030]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:10:10 np0005558317 python3.9[50184]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:10:10 np0005558317 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 13 02:10:11 np0005558317 python3.9[50336]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:10:11 np0005558317 python3.9[50488]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:10:12 np0005558317 python3.9[50611]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765609811.3657374-229-156092072474139/.source _original_basename=.crx49vks follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:10:12 np0005558317 python3.9[50763]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:10:13 np0005558317 python3.9[50915]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec 13 02:10:13 np0005558317 python3.9[51067]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:10:15 np0005558317 python3.9[51494]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec 13 02:10:16 np0005558317 ansible-async_wrapper.py[51669]: Invoked with j178991850972 300 /home/zuul/.ansible/tmp/ansible-tmp-1765609815.5161934-295-131695455347850/AnsiballZ_edpm_os_net_config.py _
Dec 13 02:10:16 np0005558317 ansible-async_wrapper.py[51672]: Starting module and watcher
Dec 13 02:10:16 np0005558317 ansible-async_wrapper.py[51672]: Start watching 51673 (300)
Dec 13 02:10:16 np0005558317 ansible-async_wrapper.py[51673]: Start module (51673)
Dec 13 02:10:16 np0005558317 ansible-async_wrapper.py[51669]: Return async_wrapper task started.
Dec 13 02:10:16 np0005558317 python3.9[51674]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Dec 13 02:10:16 np0005558317 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec 13 02:10:16 np0005558317 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec 13 02:10:16 np0005558317 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec 13 02:10:16 np0005558317 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec 13 02:10:16 np0005558317 kernel: cfg80211: failed to load regulatory.db
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.6547] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51675 uid=0 result="success"
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.6563] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51675 uid=0 result="success"
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7005] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7007] audit: op="connection-add" uuid="f94d2fdf-f006-4d53-85ba-35f4397ffee0" name="br-ex-br" pid=51675 uid=0 result="success"
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7019] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7020] audit: op="connection-add" uuid="9696c1a6-1d6b-4f09-9d0c-893ae374b298" name="br-ex-port" pid=51675 uid=0 result="success"
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7030] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7031] audit: op="connection-add" uuid="1d265d2c-addc-4642-bdde-876a43f7ecf2" name="eth1-port" pid=51675 uid=0 result="success"
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7041] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7042] audit: op="connection-add" uuid="14c5abd4-15d5-4d4c-8290-744fc8b1677a" name="vlan20-port" pid=51675 uid=0 result="success"
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7051] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7053] audit: op="connection-add" uuid="dd02dda4-d172-4158-a6e3-d73f6545a27c" name="vlan21-port" pid=51675 uid=0 result="success"
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7061] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7063] audit: op="connection-add" uuid="71f80fa1-f4b3-4c0a-87d1-e45d36aa0390" name="vlan22-port" pid=51675 uid=0 result="success"
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7072] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7073] audit: op="connection-add" uuid="cce860f3-f224-40fc-bb73-c33cf53e9d79" name="vlan23-port" pid=51675 uid=0 result="success"
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7090] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.autoconnect-priority,connection.timestamp,ipv6.dhcp-timeout,ipv6.may-fail,ipv6.addr-gen-mode,ipv6.method,ipv6.routes,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51675 uid=0 result="success"
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7104] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7105] audit: op="connection-add" uuid="a97420cc-2009-4b2a-854b-70c000ef7060" name="br-ex-if" pid=51675 uid=0 result="success"
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7126] audit: op="connection-update" uuid="45fd5f29-c067-5e65-8f11-dae4f04176a0" name="ci-private-network" args="connection.timestamp,connection.slave-type,connection.controller,connection.master,connection.port-type,ipv6.routes,ipv6.routing-rules,ipv6.addresses,ipv6.addr-gen-mode,ipv6.method,ipv6.dns,ovs-external-ids.data,ipv4.routes,ipv4.routing-rules,ipv4.addresses,ipv4.dns,ipv4.method,ipv4.never-default,ovs-interface.type" pid=51675 uid=0 result="success"
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7139] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7140] audit: op="connection-add" uuid="1326a875-3832-433f-9bbf-a54f44ef0c11" name="vlan20-if" pid=51675 uid=0 result="success"
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7152] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7153] audit: op="connection-add" uuid="088cd2e5-9f2d-45ab-ad1b-b2eddcd3cde9" name="vlan21-if" pid=51675 uid=0 result="success"
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7166] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7168] audit: op="connection-add" uuid="c7c62a70-d2ff-4219-a62b-2dbc76d3ae1d" name="vlan22-if" pid=51675 uid=0 result="success"
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7180] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7181] audit: op="connection-add" uuid="51ad8775-8f6d-4c27-b845-0a5eb300ab9a" name="vlan23-if" pid=51675 uid=0 result="success"
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7191] audit: op="connection-delete" uuid="4a989926-6152-3dd8-8a07-a3472614de2b" name="Wired connection 1" pid=51675 uid=0 result="success"
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7200] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <warn>  [1765609817.7202] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7207] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7210] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (f94d2fdf-f006-4d53-85ba-35f4397ffee0)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7211] audit: op="connection-activate" uuid="f94d2fdf-f006-4d53-85ba-35f4397ffee0" name="br-ex-br" pid=51675 uid=0 result="success"
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7212] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <warn>  [1765609817.7213] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7217] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7220] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (9696c1a6-1d6b-4f09-9d0c-893ae374b298)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7221] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <warn>  [1765609817.7222] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7225] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7228] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (1d265d2c-addc-4642-bdde-876a43f7ecf2)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7229] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <warn>  [1765609817.7230] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7234] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7237] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (14c5abd4-15d5-4d4c-8290-744fc8b1677a)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7238] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <warn>  [1765609817.7239] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7242] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7246] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (dd02dda4-d172-4158-a6e3-d73f6545a27c)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7247] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <warn>  [1765609817.7248] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7251] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7255] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (71f80fa1-f4b3-4c0a-87d1-e45d36aa0390)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7256] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <warn>  [1765609817.7257] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7260] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7264] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (cce860f3-f224-40fc-bb73-c33cf53e9d79)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7265] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7266] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7268] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7272] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <warn>  [1765609817.7273] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7275] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7278] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (a97420cc-2009-4b2a-854b-70c000ef7060)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7279] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7281] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7283] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7283] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7285] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7293] device (eth1): disconnecting for new activation request.
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7293] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7295] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7296] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7297] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7298] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <warn>  [1765609817.7299] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7300] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7303] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (1326a875-3832-433f-9bbf-a54f44ef0c11)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7303] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7305] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7306] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7306] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7308] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <warn>  [1765609817.7308] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7310] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7312] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (088cd2e5-9f2d-45ab-ad1b-b2eddcd3cde9)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7313] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7314] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7315] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7316] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7317] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <warn>  [1765609817.7318] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7319] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7322] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (c7c62a70-d2ff-4219-a62b-2dbc76d3ae1d)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7322] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7324] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7325] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7326] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7327] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <warn>  [1765609817.7328] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7329] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7334] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (51ad8775-8f6d-4c27-b845-0a5eb300ab9a)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7334] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7337] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7338] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7339] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7340] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7351] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv6.may-fail,ipv6.addr-gen-mode,ipv6.method,ipv6.routes,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51675 uid=0 result="success"
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7353] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7355] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7357] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7362] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7364] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7366] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7368] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7369] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7372] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7374] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7376] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7377] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 kernel: ovs-system: entered promiscuous mode
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7388] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7391] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7393] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7394] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7397] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 systemd-udevd[51680]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:10:17 np0005558317 kernel: Timeout policy base is empty
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7405] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7407] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7408] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7410] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7413] dhcp4 (eth0): canceled DHCP transaction
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7413] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7413] dhcp4 (eth0): state changed no lease
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7414] dhcp6 (eth0): canceled DHCP transaction
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7414] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7414] dhcp6 (eth0): state changed no lease
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7418] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7425] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7432] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec 13 02:10:17 np0005558317 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 13 02:10:17 np0005558317 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7533] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7537] dhcp4 (eth0): state changed new lease, address=192.168.25.195
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7545] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7566] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7621] device (eth1): Activation: starting connection 'ci-private-network' (45fd5f29-c067-5e65-8f11-dae4f04176a0)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7624] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7627] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7630] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51675 uid=0 result="fail" reason="Device is not activated"
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7634] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7637] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7641] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7647] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7651] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7652] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7653] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7655] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7656] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7657] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7662] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7671] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7673] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7676] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7678] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7681] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7683] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7686] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7689] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7691] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7694] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7697] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7699] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7703] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7716] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7721] device (eth1): state change: ip-check -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7723] device (eth1)[Open vSwitch Port]: detaching ovs interface eth1
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7724] device (eth1): released from controller device eth1
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7728] device (eth1): disconnecting for new activation request.
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7729] audit: op="connection-activate" uuid="45fd5f29-c067-5e65-8f11-dae4f04176a0" name="ci-private-network" pid=51675 uid=0 result="success"
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7756] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7761] device (eth1): Activation: starting connection 'ci-private-network' (45fd5f29-c067-5e65-8f11-dae4f04176a0)
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7764] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51675 uid=0 result="success"
Dec 13 02:10:17 np0005558317 kernel: br-ex: entered promiscuous mode
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7806] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7810] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7819] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7860] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7863] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7898] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7900] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7903] device (eth1): Activation: successful, device activated.
Dec 13 02:10:17 np0005558317 kernel: vlan22: entered promiscuous mode
Dec 13 02:10:17 np0005558317 systemd-udevd[51681]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7933] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7941] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.7994] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.8002] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.8030] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.8031] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.8032] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.8037] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.8042] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.8046] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 13 02:10:17 np0005558317 kernel: vlan21: entered promiscuous mode
Dec 13 02:10:17 np0005558317 kernel: vlan23: entered promiscuous mode
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.8222] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec 13 02:10:17 np0005558317 kernel: vlan20: entered promiscuous mode
Dec 13 02:10:17 np0005558317 systemd-udevd[51679]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.8237] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec 13 02:10:17 np0005558317 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.8269] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.8275] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.8311] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.8317] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.8326] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.8333] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.8335] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.8346] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.8353] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.8371] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.8401] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.8404] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 02:10:17 np0005558317 NetworkManager[48896]: <info>  [1765609817.8411] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 13 02:10:18 np0005558317 NetworkManager[48896]: <info>  [1765609818.9287] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51675 uid=0 result="success"
Dec 13 02:10:19 np0005558317 NetworkManager[48896]: <info>  [1765609819.0517] checkpoint[0x55756995d950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec 13 02:10:19 np0005558317 NetworkManager[48896]: <info>  [1765609819.0519] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51675 uid=0 result="success"
Dec 13 02:10:19 np0005558317 NetworkManager[48896]: <info>  [1765609819.1794] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51675 uid=0 result="success"
Dec 13 02:10:19 np0005558317 NetworkManager[48896]: <info>  [1765609819.1805] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51675 uid=0 result="success"
Dec 13 02:10:19 np0005558317 NetworkManager[48896]: <info>  [1765609819.3505] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51675 uid=0 result="success"
Dec 13 02:10:19 np0005558317 NetworkManager[48896]: <info>  [1765609819.4481] checkpoint[0x55756995da20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec 13 02:10:19 np0005558317 NetworkManager[48896]: <info>  [1765609819.4484] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51675 uid=0 result="success"
Dec 13 02:10:19 np0005558317 NetworkManager[48896]: <info>  [1765609819.6653] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51675 uid=0 result="success"
Dec 13 02:10:19 np0005558317 NetworkManager[48896]: <info>  [1765609819.6661] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51675 uid=0 result="success"
Dec 13 02:10:19 np0005558317 python3.9[52035]: ansible-ansible.legacy.async_status Invoked with jid=j178991850972.51669 mode=status _async_dir=/root/.ansible_async
Dec 13 02:10:19 np0005558317 NetworkManager[48896]: <info>  [1765609819.8118] audit: op="networking-control" arg="global-dns-configuration" pid=51675 uid=0 result="success"
Dec 13 02:10:19 np0005558317 NetworkManager[48896]: <info>  [1765609819.8130] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf)
Dec 13 02:10:19 np0005558317 NetworkManager[48896]: <info>  [1765609819.8135] audit: op="networking-control" arg="global-dns-configuration" pid=51675 uid=0 result="success"
Dec 13 02:10:19 np0005558317 NetworkManager[48896]: <info>  [1765609819.8173] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51675 uid=0 result="success"
Dec 13 02:10:19 np0005558317 NetworkManager[48896]: <info>  [1765609819.9415] checkpoint[0x55756995daf0]: destroy /org/freedesktop/NetworkManager/Checkpoint/3
Dec 13 02:10:19 np0005558317 NetworkManager[48896]: <info>  [1765609819.9419] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51675 uid=0 result="success"
Dec 13 02:10:19 np0005558317 ansible-async_wrapper.py[51673]: Module complete (51673)
Dec 13 02:10:21 np0005558317 ansible-async_wrapper.py[51672]: Done in kid B.
Dec 13 02:10:23 np0005558317 python3.9[52139]: ansible-ansible.legacy.async_status Invoked with jid=j178991850972.51669 mode=status _async_dir=/root/.ansible_async
Dec 13 02:10:23 np0005558317 python3.9[52239]: ansible-ansible.legacy.async_status Invoked with jid=j178991850972.51669 mode=cleanup _async_dir=/root/.ansible_async
Dec 13 02:10:24 np0005558317 python3.9[52391]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:10:24 np0005558317 python3.9[52514]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765609823.6802783-322-144598896252252/.source.returncode _original_basename=.i7e6mtbn follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:10:24 np0005558317 python3.9[52666]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:10:25 np0005558317 python3.9[52789]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765609824.6031902-338-70353546075863/.source.cfg _original_basename=.w6uebeag follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:10:25 np0005558317 python3.9[52941]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:10:25 np0005558317 systemd[1]: Reloading Network Manager...
Dec 13 02:10:25 np0005558317 NetworkManager[48896]: <info>  [1765609825.9778] audit: op="reload" arg="0" pid=52945 uid=0 result="success"
Dec 13 02:10:25 np0005558317 NetworkManager[48896]: <info>  [1765609825.9783] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec 13 02:10:25 np0005558317 NetworkManager[48896]: <info>  [1765609825.9784] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 13 02:10:25 np0005558317 systemd[1]: Reloaded Network Manager.
Dec 13 02:10:26 np0005558317 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 02:10:26 np0005558317 systemd[1]: session-9.scope: Consumed 38.647s CPU time.
Dec 13 02:10:26 np0005558317 systemd-logind[745]: Session 9 logged out. Waiting for processes to exit.
Dec 13 02:10:26 np0005558317 systemd-logind[745]: Removed session 9.
Dec 13 02:10:29 np0005558317 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 02:10:30 np0005558317 systemd-logind[745]: New session 10 of user zuul.
Dec 13 02:10:30 np0005558317 systemd[1]: Started Session 10 of User zuul.
Dec 13 02:10:31 np0005558317 python3.9[53131]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:10:32 np0005558317 python3.9[53285]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:10:33 np0005558317 python3.9[53479]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:10:33 np0005558317 systemd-logind[745]: Session 10 logged out. Waiting for processes to exit.
Dec 13 02:10:33 np0005558317 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 02:10:33 np0005558317 systemd[1]: session-10.scope: Consumed 1.711s CPU time.
Dec 13 02:10:33 np0005558317 systemd-logind[745]: Removed session 10.
Dec 13 02:10:36 np0005558317 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 13 02:10:39 np0005558317 systemd-logind[745]: New session 11 of user zuul.
Dec 13 02:10:39 np0005558317 systemd[1]: Started Session 11 of User zuul.
Dec 13 02:10:39 np0005558317 python3.9[53661]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:10:40 np0005558317 python3.9[53815]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:10:41 np0005558317 python3.9[53971]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:10:42 np0005558317 python3.9[54055]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:10:43 np0005558317 python3.9[54209]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:10:44 np0005558317 python3.9[54404]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:10:45 np0005558317 python3.9[54556]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:10:45 np0005558317 systemd[1]: var-lib-containers-storage-overlay-compat3730731190-merged.mount: Deactivated successfully.
Dec 13 02:10:45 np0005558317 podman[54557]: 2025-12-13 07:10:45.184966755 +0000 UTC m=+0.029003722 system refresh
Dec 13 02:10:45 np0005558317 python3.9[54717]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:10:46 np0005558317 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 02:10:46 np0005558317 python3.9[54841]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765609845.3247852-79-240408286586893/.source.json follow=False _original_basename=podman_network_config.j2 checksum=9dbfc7db70a09a2b7e7975cbca18d4f65ab65e4c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:10:46 np0005558317 python3.9[54993]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:10:47 np0005558317 python3.9[55116]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765609846.4755156-94-6740131187915/.source.conf follow=False _original_basename=registries.conf.j2 checksum=97c740afc5391c47ef8b0bfc44a8fae07d2d9f9b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:10:47 np0005558317 python3.9[55268]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:10:48 np0005558317 python3.9[55420]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:10:48 np0005558317 python3.9[55572]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:10:49 np0005558317 python3.9[55725]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:10:49 np0005558317 python3.9[55877]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:10:51 np0005558317 python3.9[56030]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:10:52 np0005558317 python3.9[56184]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:10:52 np0005558317 python3.9[56336]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:10:53 np0005558317 python3.9[56488]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:10:53 np0005558317 python3.9[56641]: ansible-service_facts Invoked
Dec 13 02:10:53 np0005558317 network[56658]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 02:10:53 np0005558317 network[56659]: 'network-scripts' will be removed from distribution in near future.
Dec 13 02:10:53 np0005558317 network[56660]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 02:10:57 np0005558317 python3.9[57112]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:10:59 np0005558317 python3.9[57265]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 13 02:10:59 np0005558317 python3.9[57417]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:11:00 np0005558317 python3.9[57542]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765609859.4913623-238-118250819008665/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:00 np0005558317 python3.9[57696]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:11:01 np0005558317 python3.9[57821]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765609860.498351-253-14332203429383/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:02 np0005558317 python3.9[57975]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:02 np0005558317 python3.9[58129]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:11:03 np0005558317 python3.9[58213]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:11:04 np0005558317 python3.9[58367]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:11:05 np0005558317 python3.9[58451]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:11:05 np0005558317 chronyd[754]: chronyd exiting
Dec 13 02:11:05 np0005558317 systemd[1]: Stopping NTP client/server...
Dec 13 02:11:05 np0005558317 systemd[1]: chronyd.service: Deactivated successfully.
Dec 13 02:11:05 np0005558317 systemd[1]: Stopped NTP client/server.
Dec 13 02:11:05 np0005558317 systemd[1]: Starting NTP client/server...
Dec 13 02:11:05 np0005558317 chronyd[58459]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 13 02:11:05 np0005558317 chronyd[58459]: Frequency -4.566 +/- 0.327 ppm read from /var/lib/chrony/drift
Dec 13 02:11:05 np0005558317 chronyd[58459]: Loaded seccomp filter (level 2)
Dec 13 02:11:05 np0005558317 systemd[1]: Started NTP client/server.
Dec 13 02:11:05 np0005558317 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 02:11:05 np0005558317 systemd[1]: session-11.scope: Consumed 18.888s CPU time.
Dec 13 02:11:05 np0005558317 systemd-logind[745]: Session 11 logged out. Waiting for processes to exit.
Dec 13 02:11:05 np0005558317 systemd-logind[745]: Removed session 11.
Dec 13 02:11:11 np0005558317 systemd-logind[745]: New session 12 of user zuul.
Dec 13 02:11:11 np0005558317 systemd[1]: Started Session 12 of User zuul.
Dec 13 02:11:12 np0005558317 python3.9[58640]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:12 np0005558317 python3.9[58792]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:11:13 np0005558317 python3.9[58915]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765609872.1388798-34-180363317541784/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:13 np0005558317 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 02:11:13 np0005558317 systemd[1]: session-12.scope: Consumed 1.156s CPU time.
Dec 13 02:11:13 np0005558317 systemd-logind[745]: Session 12 logged out. Waiting for processes to exit.
Dec 13 02:11:13 np0005558317 systemd-logind[745]: Removed session 12.
Dec 13 02:11:18 np0005558317 systemd-logind[745]: New session 13 of user zuul.
Dec 13 02:11:18 np0005558317 systemd[1]: Started Session 13 of User zuul.
Dec 13 02:11:19 np0005558317 python3.9[59093]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:11:20 np0005558317 python3.9[59249]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:20 np0005558317 python3.9[59424]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:11:21 np0005558317 python3.9[59547]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1765609880.175237-41-100752618407180/.source.json _original_basename=.iwawi7m5 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:21 np0005558317 python3.9[59699]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:11:22 np0005558317 python3.9[59822]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765609881.5705988-64-14657562343924/.source _original_basename=.08btlcmp follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:22 np0005558317 python3.9[59974]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:11:23 np0005558317 python3.9[60126]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:11:23 np0005558317 python3.9[60249]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765609882.9051979-88-231211034990961/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:11:24 np0005558317 python3.9[60401]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:11:24 np0005558317 python3.9[60524]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765609883.7394176-88-171973322814779/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:11:25 np0005558317 python3.9[60676]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:25 np0005558317 python3.9[60828]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:11:25 np0005558317 python3.9[60951]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765609885.1973333-125-221929501580681/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:26 np0005558317 python3.9[61103]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:11:26 np0005558317 python3.9[61226]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765609886.0176003-140-273033734956672/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:27 np0005558317 python3.9[61378]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:11:27 np0005558317 systemd[1]: Reloading.
Dec 13 02:11:27 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:11:27 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:11:27 np0005558317 systemd[1]: Reloading.
Dec 13 02:11:27 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:11:27 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:11:27 np0005558317 systemd[1]: Starting EDPM Container Shutdown...
Dec 13 02:11:27 np0005558317 systemd[1]: Finished EDPM Container Shutdown.
Dec 13 02:11:28 np0005558317 python3.9[61605]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:11:28 np0005558317 python3.9[61728]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765609888.0630047-163-181045583486872/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:29 np0005558317 python3.9[61880]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:11:29 np0005558317 python3.9[62003]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765609888.9470243-178-138205182362353/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:30 np0005558317 python3.9[62155]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:11:30 np0005558317 systemd[1]: Reloading.
Dec 13 02:11:30 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:11:30 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:11:30 np0005558317 systemd[1]: Reloading.
Dec 13 02:11:30 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:11:30 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:11:30 np0005558317 systemd[1]: Starting Create netns directory...
Dec 13 02:11:30 np0005558317 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 13 02:11:30 np0005558317 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 13 02:11:30 np0005558317 systemd[1]: Finished Create netns directory.
Dec 13 02:11:31 np0005558317 python3.9[62382]: ansible-ansible.builtin.service_facts Invoked
Dec 13 02:11:31 np0005558317 network[62399]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 02:11:31 np0005558317 network[62400]: 'network-scripts' will be removed from distribution in near future.
Dec 13 02:11:31 np0005558317 network[62401]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 02:11:33 np0005558317 python3.9[62663]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:11:33 np0005558317 systemd[1]: Reloading.
Dec 13 02:11:33 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:11:33 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:11:33 np0005558317 systemd[1]: Stopping IPv4 firewall with iptables...
Dec 13 02:11:34 np0005558317 iptables.init[62703]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec 13 02:11:34 np0005558317 iptables.init[62703]: iptables: Flushing firewall rules: [  OK  ]
Dec 13 02:11:34 np0005558317 systemd[1]: iptables.service: Deactivated successfully.
Dec 13 02:11:34 np0005558317 systemd[1]: Stopped IPv4 firewall with iptables.
Dec 13 02:11:34 np0005558317 python3.9[62899]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:11:35 np0005558317 python3.9[63053]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:11:35 np0005558317 systemd[1]: Reloading.
Dec 13 02:11:35 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:11:35 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:11:35 np0005558317 systemd[1]: Starting Netfilter Tables...
Dec 13 02:11:35 np0005558317 systemd[1]: Finished Netfilter Tables.
Dec 13 02:11:36 np0005558317 python3.9[63244]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:11:36 np0005558317 python3.9[63397]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:11:37 np0005558317 python3.9[63522]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765609896.5048082-247-269937766668422/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:37 np0005558317 python3.9[63675]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:11:37 np0005558317 systemd[1]: Reloading OpenSSH server daemon...
Dec 13 02:11:37 np0005558317 systemd[1]: Reloaded OpenSSH server daemon.
Dec 13 02:11:38 np0005558317 python3.9[63831]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:38 np0005558317 python3.9[63983]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:11:39 np0005558317 python3.9[64106]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765609898.5001786-278-54989060502867/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:39 np0005558317 python3.9[64258]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 13 02:11:40 np0005558317 systemd[1]: Starting Time & Date Service...
Dec 13 02:11:40 np0005558317 systemd[1]: Started Time & Date Service.
Dec 13 02:11:40 np0005558317 python3.9[64414]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:41 np0005558317 python3.9[64566]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:11:41 np0005558317 python3.9[64689]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765609900.7138972-313-44262870729921/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:41 np0005558317 python3.9[64841]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:11:42 np0005558317 python3.9[64964]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765609901.601812-328-244938739988960/.source.yaml _original_basename=.n9196qbd follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:42 np0005558317 python3.9[65116]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:11:43 np0005558317 python3.9[65239]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765609902.4679198-343-57047322663997/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:43 np0005558317 python3.9[65391]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:11:44 np0005558317 python3.9[65544]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:11:44 np0005558317 python3[65697]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 13 02:11:45 np0005558317 python3.9[65849]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:11:45 np0005558317 python3.9[65972]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765609904.8921113-382-2629788672146/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:46 np0005558317 python3.9[66124]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:11:46 np0005558317 python3.9[66247]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765609905.7352118-397-240507926422713/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:47 np0005558317 python3.9[66399]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:11:47 np0005558317 python3.9[66522]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765609906.6095788-412-218695808680096/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:47 np0005558317 python3.9[66674]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:11:48 np0005558317 python3.9[66797]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765609907.6108472-427-19212125903793/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:48 np0005558317 python3.9[66949]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:11:49 np0005558317 python3.9[67072]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765609908.4783607-442-26662363361057/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:49 np0005558317 python3.9[67224]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:50 np0005558317 python3.9[67376]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:11:50 np0005558317 python3.9[67535]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:51 np0005558317 python3.9[67688]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:51 np0005558317 python3.9[67840]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:11:52 np0005558317 python3.9[67992]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 13 02:11:53 np0005558317 python3.9[68145]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 13 02:11:53 np0005558317 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 02:11:53 np0005558317 systemd[1]: session-13.scope: Consumed 24.831s CPU time.
Dec 13 02:11:53 np0005558317 systemd-logind[745]: Session 13 logged out. Waiting for processes to exit.
Dec 13 02:11:53 np0005558317 systemd-logind[745]: Removed session 13.
Dec 13 02:11:58 np0005558317 systemd-logind[745]: New session 14 of user zuul.
Dec 13 02:11:58 np0005558317 systemd[1]: Started Session 14 of User zuul.
Dec 13 02:11:59 np0005558317 python3.9[68327]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 13 02:11:59 np0005558317 python3.9[68479]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:12:00 np0005558317 python3.9[68631]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:12:01 np0005558317 python3.9[68783]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCekpfjOZMQHu4kGkMmbnPcCtz1ykBu18rwwghFZ6JdZNeLGT0geVZzeGTxx67o32Xucl5rndeaEtZvZfxTXM1W/3Z9ig0x1tTtqK2lTLjxcw4+AxChtq8Mt1LZKUi2MHVUdDkB8UwKvPPC6k5NFQRBu1jsX63zDiUCudXQlFm49OLA8BZh7VuZYlpOMnuiPC9cWsSAehEH4hmIdqlyl7xhfBn/4IId10yPH4Bev4qk4z212G730uw0ldn9RfPP2Batr31zKwOCUveVL5V48yK6VIj2O4uztbh6yagWlbqPwmUoYdvokyMVmONCStsc8BDSSaTmH7gv6cm1tfpfpKJlBo25kpuVocNQaaZB8/x71weojzujWfYBPfwbGARRkq9lgjdmyLJot9XdtcDkAKNeE6nzDo29nj1SpYzDYu2OrwI8RN9TLEQyXyUi80L4ELrI2WrVf5NwIvfG0ZKHurHxEDYcJKris+z3lCdPHRbw/D0HAhFZ6YnnViCeqLe+XL0=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDlhQSLisbnaeA/5eqQ07vXPLvOWH+wLodInwcPHjCbq#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL+1SrJ/t+tkNcFtDd1R0f0/5owYzeRM7hR2TrpSEQtZk5y2BWR+htC7NOo7cYghMztLnyJaOIsNSp9NjO5UEBE=#012 create=True mode=0644 path=/tmp/ansible.up0z_r17 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:12:01 np0005558317 python3.9[68935]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.up0z_r17' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:12:02 np0005558317 python3.9[69089]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.up0z_r17 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:12:02 np0005558317 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 02:12:02 np0005558317 systemd[1]: session-14.scope: Consumed 2.380s CPU time.
Dec 13 02:12:02 np0005558317 systemd-logind[745]: Session 14 logged out. Waiting for processes to exit.
Dec 13 02:12:02 np0005558317 systemd-logind[745]: Removed session 14.
Dec 13 02:12:07 np0005558317 systemd-logind[745]: New session 15 of user zuul.
Dec 13 02:12:07 np0005558317 systemd[1]: Started Session 15 of User zuul.
Dec 13 02:12:08 np0005558317 python3.9[69267]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:12:09 np0005558317 python3.9[69423]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 13 02:12:09 np0005558317 python3.9[69577]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:12:10 np0005558317 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 13 02:12:10 np0005558317 python3.9[69732]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:12:11 np0005558317 python3.9[69885]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:12:11 np0005558317 python3.9[70039]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:12:12 np0005558317 python3.9[70194]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:12:12 np0005558317 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 02:12:12 np0005558317 systemd[1]: session-15.scope: Consumed 3.206s CPU time.
Dec 13 02:12:12 np0005558317 systemd-logind[745]: Session 15 logged out. Waiting for processes to exit.
Dec 13 02:12:12 np0005558317 systemd-logind[745]: Removed session 15.
Dec 13 02:12:17 np0005558317 systemd-logind[745]: New session 16 of user zuul.
Dec 13 02:12:17 np0005558317 systemd[1]: Started Session 16 of User zuul.
Dec 13 02:12:18 np0005558317 python3.9[70372]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:12:19 np0005558317 python3.9[70528]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:12:20 np0005558317 python3.9[70612]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 13 02:12:21 np0005558317 python3.9[70763]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:12:22 np0005558317 python3.9[70914]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 13 02:12:23 np0005558317 python3.9[71064]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:12:23 np0005558317 python3.9[71214]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:12:24 np0005558317 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 02:12:24 np0005558317 systemd[1]: session-16.scope: Consumed 4.394s CPU time.
Dec 13 02:12:24 np0005558317 systemd-logind[745]: Session 16 logged out. Waiting for processes to exit.
Dec 13 02:12:24 np0005558317 systemd-logind[745]: Removed session 16.
Dec 13 02:12:30 np0005558317 systemd-logind[745]: New session 17 of user zuul.
Dec 13 02:12:30 np0005558317 systemd[1]: Started Session 17 of User zuul.
Dec 13 02:12:34 np0005558317 python3[71980]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:12:35 np0005558317 python3[72071]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 13 02:12:37 np0005558317 python3[72098]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 02:12:37 np0005558317 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 02:12:37 np0005558317 python3[72125]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:12:37 np0005558317 kernel: loop: module loaded
Dec 13 02:12:37 np0005558317 kernel: loop3: detected capacity change from 0 to 41943040
Dec 13 02:12:37 np0005558317 python3[72160]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:12:37 np0005558317 lvm[72163]: PV /dev/loop3 not used.
Dec 13 02:12:37 np0005558317 lvm[72172]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:12:37 np0005558317 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec 13 02:12:37 np0005558317 lvm[72174]:  1 logical volume(s) in volume group "ceph_vg0" now active
Dec 13 02:12:37 np0005558317 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Dec 13 02:12:38 np0005558317 python3[72252]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 02:12:38 np0005558317 python3[72325]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765609957.8479996-36604-11195434469407/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:12:38 np0005558317 python3[72375]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:12:38 np0005558317 systemd[1]: Reloading.
Dec 13 02:12:39 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:12:39 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:12:39 np0005558317 systemd[1]: Starting Ceph OSD losetup...
Dec 13 02:12:39 np0005558317 bash[72414]: /dev/loop3: [64513]:4327953 (/var/lib/ceph-osd-0.img)
Dec 13 02:12:39 np0005558317 systemd[1]: Finished Ceph OSD losetup.
Dec 13 02:12:39 np0005558317 lvm[72415]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:12:39 np0005558317 lvm[72415]: VG ceph_vg0 finished
Dec 13 02:12:39 np0005558317 python3[72441]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 13 02:12:40 np0005558317 python3[72468]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 02:12:40 np0005558317 python3[72494]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:12:40 np0005558317 kernel: loop4: detected capacity change from 0 to 41943040
Dec 13 02:12:41 np0005558317 python3[72526]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:12:41 np0005558317 lvm[72529]: PV /dev/loop4 not used.
Dec 13 02:12:41 np0005558317 lvm[72538]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:12:41 np0005558317 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Dec 13 02:12:41 np0005558317 lvm[72540]:  1 logical volume(s) in volume group "ceph_vg1" now active
Dec 13 02:12:41 np0005558317 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Dec 13 02:12:41 np0005558317 python3[72618]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 02:12:41 np0005558317 python3[72691]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765609961.447473-36631-219866393732014/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:12:42 np0005558317 python3[72741]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:12:42 np0005558317 systemd[1]: Reloading.
Dec 13 02:12:42 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:12:42 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:12:42 np0005558317 systemd[1]: Starting Ceph OSD losetup...
Dec 13 02:12:42 np0005558317 bash[72781]: /dev/loop4: [64513]:4327955 (/var/lib/ceph-osd-1.img)
Dec 13 02:12:42 np0005558317 systemd[1]: Finished Ceph OSD losetup.
Dec 13 02:12:42 np0005558317 lvm[72782]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:12:42 np0005558317 lvm[72782]: VG ceph_vg1 finished
Dec 13 02:12:42 np0005558317 python3[72808]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 13 02:12:43 np0005558317 python3[72835]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 02:12:44 np0005558317 python3[72861]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:12:44 np0005558317 kernel: loop5: detected capacity change from 0 to 41943040
Dec 13 02:12:44 np0005558317 python3[72893]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:12:44 np0005558317 lvm[72896]: PV /dev/loop5 not used.
Dec 13 02:12:44 np0005558317 lvm[72906]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:12:44 np0005558317 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Dec 13 02:12:44 np0005558317 lvm[72908]:  1 logical volume(s) in volume group "ceph_vg2" now active
Dec 13 02:12:44 np0005558317 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Dec 13 02:12:44 np0005558317 python3[72986]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 02:12:45 np0005558317 python3[73059]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765609964.698612-36658-50672676213534/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:12:45 np0005558317 python3[73109]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:12:45 np0005558317 systemd[1]: Reloading.
Dec 13 02:12:45 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:12:45 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:12:45 np0005558317 systemd[1]: Starting Ceph OSD losetup...
Dec 13 02:12:45 np0005558317 bash[73148]: /dev/loop5: [64513]:4327967 (/var/lib/ceph-osd-2.img)
Dec 13 02:12:45 np0005558317 systemd[1]: Finished Ceph OSD losetup.
Dec 13 02:12:45 np0005558317 lvm[73149]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:12:45 np0005558317 lvm[73149]: VG ceph_vg2 finished
Dec 13 02:12:47 np0005558317 python3[73173]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:12:49 np0005558317 python3[73266]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 13 02:12:50 np0005558317 python3[73323]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 13 02:12:53 np0005558317 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 02:12:53 np0005558317 systemd[1]: Starting man-db-cache-update.service...
Dec 13 02:12:54 np0005558317 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 02:12:54 np0005558317 systemd[1]: Finished man-db-cache-update.service.
Dec 13 02:12:54 np0005558317 systemd[1]: run-r5911b45104d548e08c8fbb96fa64b6ce.service: Deactivated successfully.
Dec 13 02:12:54 np0005558317 python3[73445]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 02:12:54 np0005558317 python3[73473]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:12:54 np0005558317 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 02:12:55 np0005558317 python3[73509]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:12:55 np0005558317 python3[73535]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:12:56 np0005558317 python3[73613]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 02:12:56 np0005558317 python3[73686]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765609975.884619-36806-246648357210251/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:12:56 np0005558317 python3[73788]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 02:12:57 np0005558317 python3[73861]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765609976.7038133-36824-145414303229595/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:12:57 np0005558317 python3[73911]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 02:12:57 np0005558317 python3[73939]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 02:12:57 np0005558317 python3[73967]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 02:12:58 np0005558317 python3[73995]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:12:58 np0005558317 systemd[1]: Created slice User Slice of UID 42477.
Dec 13 02:12:58 np0005558317 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 13 02:12:58 np0005558317 systemd-logind[745]: New session 18 of user ceph-admin.
Dec 13 02:12:58 np0005558317 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 13 02:12:58 np0005558317 systemd[1]: Starting User Manager for UID 42477...
Dec 13 02:12:58 np0005558317 systemd[74003]: Queued start job for default target Main User Target.
Dec 13 02:12:58 np0005558317 systemd[74003]: Created slice User Application Slice.
Dec 13 02:12:58 np0005558317 systemd[74003]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 13 02:12:58 np0005558317 systemd[74003]: Started Daily Cleanup of User's Temporary Directories.
Dec 13 02:12:58 np0005558317 systemd[74003]: Reached target Paths.
Dec 13 02:12:58 np0005558317 systemd[74003]: Reached target Timers.
Dec 13 02:12:58 np0005558317 systemd[74003]: Starting D-Bus User Message Bus Socket...
Dec 13 02:12:58 np0005558317 systemd[74003]: Starting Create User's Volatile Files and Directories...
Dec 13 02:12:58 np0005558317 systemd[74003]: Finished Create User's Volatile Files and Directories.
Dec 13 02:12:58 np0005558317 systemd[74003]: Listening on D-Bus User Message Bus Socket.
Dec 13 02:12:58 np0005558317 systemd[74003]: Reached target Sockets.
Dec 13 02:12:58 np0005558317 systemd[74003]: Reached target Basic System.
Dec 13 02:12:58 np0005558317 systemd[74003]: Reached target Main User Target.
Dec 13 02:12:58 np0005558317 systemd[74003]: Startup finished in 94ms.
Dec 13 02:12:58 np0005558317 systemd[1]: Started User Manager for UID 42477.
Dec 13 02:12:58 np0005558317 systemd[1]: Started Session 18 of User ceph-admin.
Dec 13 02:12:58 np0005558317 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 02:12:58 np0005558317 systemd-logind[745]: Session 18 logged out. Waiting for processes to exit.
Dec 13 02:12:58 np0005558317 systemd-logind[745]: Removed session 18.
Dec 13 02:12:58 np0005558317 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 02:12:58 np0005558317 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 02:13:01 np0005558317 systemd[1]: var-lib-containers-storage-overlay-compat1829252655-merged.mount: Deactivated successfully.
Dec 13 02:13:01 np0005558317 systemd[1]: var-lib-containers-storage-overlay-compat1829252655-lower\x2dmapped.mount: Deactivated successfully.
Dec 13 02:13:08 np0005558317 systemd[1]: Stopping User Manager for UID 42477...
Dec 13 02:13:08 np0005558317 systemd[74003]: Activating special unit Exit the Session...
Dec 13 02:13:08 np0005558317 systemd[74003]: Stopped target Main User Target.
Dec 13 02:13:08 np0005558317 systemd[74003]: Stopped target Basic System.
Dec 13 02:13:08 np0005558317 systemd[74003]: Stopped target Paths.
Dec 13 02:13:08 np0005558317 systemd[74003]: Stopped target Sockets.
Dec 13 02:13:08 np0005558317 systemd[74003]: Stopped target Timers.
Dec 13 02:13:08 np0005558317 systemd[74003]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 13 02:13:08 np0005558317 systemd[74003]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 13 02:13:08 np0005558317 systemd[74003]: Closed D-Bus User Message Bus Socket.
Dec 13 02:13:08 np0005558317 systemd[74003]: Stopped Create User's Volatile Files and Directories.
Dec 13 02:13:08 np0005558317 systemd[74003]: Removed slice User Application Slice.
Dec 13 02:13:08 np0005558317 systemd[74003]: Reached target Shutdown.
Dec 13 02:13:08 np0005558317 systemd[74003]: Finished Exit the Session.
Dec 13 02:13:08 np0005558317 systemd[74003]: Reached target Exit the Session.
Dec 13 02:13:08 np0005558317 systemd[1]: user@42477.service: Deactivated successfully.
Dec 13 02:13:08 np0005558317 systemd[1]: Stopped User Manager for UID 42477.
Dec 13 02:13:08 np0005558317 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec 13 02:13:08 np0005558317 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec 13 02:13:08 np0005558317 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec 13 02:13:08 np0005558317 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec 13 02:13:08 np0005558317 systemd[1]: Removed slice User Slice of UID 42477.
Dec 13 02:13:16 np0005558317 chronyd[58459]: Selected source 141.11.228.173 (pool.ntp.org)
Dec 13 02:13:18 np0005558317 podman[74092]: 2025-12-13 07:13:18.69487066 +0000 UTC m=+19.873987126 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:18 np0005558317 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 02:13:18 np0005558317 podman[74144]: 2025-12-13 07:13:18.741129831 +0000 UTC m=+0.027033535 container create 843090b43018c328505c29f9accca29716b5f498cea87f750d2998aea9dcef6f (image=quay.io/ceph/ceph:v20, name=ecstatic_sammet, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:13:18 np0005558317 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck1329027727-merged.mount: Deactivated successfully.
Dec 13 02:13:18 np0005558317 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec 13 02:13:18 np0005558317 systemd[1]: Started libpod-conmon-843090b43018c328505c29f9accca29716b5f498cea87f750d2998aea9dcef6f.scope.
Dec 13 02:13:18 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:18 np0005558317 podman[74144]: 2025-12-13 07:13:18.805304934 +0000 UTC m=+0.091208659 container init 843090b43018c328505c29f9accca29716b5f498cea87f750d2998aea9dcef6f (image=quay.io/ceph/ceph:v20, name=ecstatic_sammet, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:13:18 np0005558317 podman[74144]: 2025-12-13 07:13:18.810917366 +0000 UTC m=+0.096821071 container start 843090b43018c328505c29f9accca29716b5f498cea87f750d2998aea9dcef6f (image=quay.io/ceph/ceph:v20, name=ecstatic_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 02:13:18 np0005558317 podman[74144]: 2025-12-13 07:13:18.814462601 +0000 UTC m=+0.100366306 container attach 843090b43018c328505c29f9accca29716b5f498cea87f750d2998aea9dcef6f (image=quay.io/ceph/ceph:v20, name=ecstatic_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 02:13:18 np0005558317 podman[74144]: 2025-12-13 07:13:18.730417912 +0000 UTC m=+0.016321637 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:18 np0005558317 ecstatic_sammet[74157]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Dec 13 02:13:18 np0005558317 systemd[1]: libpod-843090b43018c328505c29f9accca29716b5f498cea87f750d2998aea9dcef6f.scope: Deactivated successfully.
Dec 13 02:13:18 np0005558317 podman[74144]: 2025-12-13 07:13:18.892663855 +0000 UTC m=+0.178567560 container died 843090b43018c328505c29f9accca29716b5f498cea87f750d2998aea9dcef6f (image=quay.io/ceph/ceph:v20, name=ecstatic_sammet, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 02:13:18 np0005558317 podman[74144]: 2025-12-13 07:13:18.910463899 +0000 UTC m=+0.196367604 container remove 843090b43018c328505c29f9accca29716b5f498cea87f750d2998aea9dcef6f (image=quay.io/ceph/ceph:v20, name=ecstatic_sammet, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:13:18 np0005558317 systemd[1]: libpod-conmon-843090b43018c328505c29f9accca29716b5f498cea87f750d2998aea9dcef6f.scope: Deactivated successfully.
Dec 13 02:13:18 np0005558317 podman[74170]: 2025-12-13 07:13:18.955504699 +0000 UTC m=+0.028646289 container create f35db3dabbd45f5a4aabf7e14a41193930df696cb0bf8e7d0a598778472c4954 (image=quay.io/ceph/ceph:v20, name=eager_antonelli, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 02:13:18 np0005558317 systemd[1]: Started libpod-conmon-f35db3dabbd45f5a4aabf7e14a41193930df696cb0bf8e7d0a598778472c4954.scope.
Dec 13 02:13:18 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:19 np0005558317 podman[74170]: 2025-12-13 07:13:19.003241508 +0000 UTC m=+0.076383097 container init f35db3dabbd45f5a4aabf7e14a41193930df696cb0bf8e7d0a598778472c4954 (image=quay.io/ceph/ceph:v20, name=eager_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 02:13:19 np0005558317 podman[74170]: 2025-12-13 07:13:19.007599331 +0000 UTC m=+0.080740920 container start f35db3dabbd45f5a4aabf7e14a41193930df696cb0bf8e7d0a598778472c4954 (image=quay.io/ceph/ceph:v20, name=eager_antonelli, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 02:13:19 np0005558317 podman[74170]: 2025-12-13 07:13:19.008610061 +0000 UTC m=+0.081751641 container attach f35db3dabbd45f5a4aabf7e14a41193930df696cb0bf8e7d0a598778472c4954 (image=quay.io/ceph/ceph:v20, name=eager_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 02:13:19 np0005558317 eager_antonelli[74184]: 167 167
Dec 13 02:13:19 np0005558317 systemd[1]: libpod-f35db3dabbd45f5a4aabf7e14a41193930df696cb0bf8e7d0a598778472c4954.scope: Deactivated successfully.
Dec 13 02:13:19 np0005558317 podman[74170]: 2025-12-13 07:13:19.010745636 +0000 UTC m=+0.083887225 container died f35db3dabbd45f5a4aabf7e14a41193930df696cb0bf8e7d0a598778472c4954 (image=quay.io/ceph/ceph:v20, name=eager_antonelli, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 02:13:19 np0005558317 podman[74170]: 2025-12-13 07:13:19.027735559 +0000 UTC m=+0.100877148 container remove f35db3dabbd45f5a4aabf7e14a41193930df696cb0bf8e7d0a598778472c4954 (image=quay.io/ceph/ceph:v20, name=eager_antonelli, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 02:13:19 np0005558317 podman[74170]: 2025-12-13 07:13:18.944317838 +0000 UTC m=+0.017459426 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:19 np0005558317 systemd[1]: libpod-conmon-f35db3dabbd45f5a4aabf7e14a41193930df696cb0bf8e7d0a598778472c4954.scope: Deactivated successfully.
Dec 13 02:13:19 np0005558317 podman[74198]: 2025-12-13 07:13:19.073056355 +0000 UTC m=+0.027654473 container create 292dfa3faea3a69a7d71aa833970bb632ab3b295fffdeadb2df1aa0a6af12664 (image=quay.io/ceph/ceph:v20, name=gracious_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:13:19 np0005558317 systemd[1]: Started libpod-conmon-292dfa3faea3a69a7d71aa833970bb632ab3b295fffdeadb2df1aa0a6af12664.scope.
Dec 13 02:13:19 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:19 np0005558317 podman[74198]: 2025-12-13 07:13:19.131662308 +0000 UTC m=+0.086260445 container init 292dfa3faea3a69a7d71aa833970bb632ab3b295fffdeadb2df1aa0a6af12664 (image=quay.io/ceph/ceph:v20, name=gracious_austin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:13:19 np0005558317 podman[74198]: 2025-12-13 07:13:19.135733101 +0000 UTC m=+0.090331219 container start 292dfa3faea3a69a7d71aa833970bb632ab3b295fffdeadb2df1aa0a6af12664 (image=quay.io/ceph/ceph:v20, name=gracious_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:13:19 np0005558317 podman[74198]: 2025-12-13 07:13:19.138043345 +0000 UTC m=+0.092641463 container attach 292dfa3faea3a69a7d71aa833970bb632ab3b295fffdeadb2df1aa0a6af12664 (image=quay.io/ceph/ceph:v20, name=gracious_austin, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 02:13:19 np0005558317 gracious_austin[74213]: AQAPEj1pehcICRAARZufc0en6jBlZpV54gKkPA==
Dec 13 02:13:19 np0005558317 systemd[1]: libpod-292dfa3faea3a69a7d71aa833970bb632ab3b295fffdeadb2df1aa0a6af12664.scope: Deactivated successfully.
Dec 13 02:13:19 np0005558317 podman[74198]: 2025-12-13 07:13:19.154538977 +0000 UTC m=+0.109137096 container died 292dfa3faea3a69a7d71aa833970bb632ab3b295fffdeadb2df1aa0a6af12664 (image=quay.io/ceph/ceph:v20, name=gracious_austin, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:13:19 np0005558317 podman[74198]: 2025-12-13 07:13:19.062011609 +0000 UTC m=+0.016609747 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:19 np0005558317 podman[74198]: 2025-12-13 07:13:19.16935334 +0000 UTC m=+0.123951458 container remove 292dfa3faea3a69a7d71aa833970bb632ab3b295fffdeadb2df1aa0a6af12664 (image=quay.io/ceph/ceph:v20, name=gracious_austin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:13:19 np0005558317 systemd[1]: libpod-conmon-292dfa3faea3a69a7d71aa833970bb632ab3b295fffdeadb2df1aa0a6af12664.scope: Deactivated successfully.
Dec 13 02:13:19 np0005558317 podman[74230]: 2025-12-13 07:13:19.212465934 +0000 UTC m=+0.027649945 container create 9f04ed7439dd2b39a95b2f2c5be2cbb1773b35da21cc4a0139ea201d157b5196 (image=quay.io/ceph/ceph:v20, name=sleepy_stonebraker, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:13:19 np0005558317 systemd[1]: Started libpod-conmon-9f04ed7439dd2b39a95b2f2c5be2cbb1773b35da21cc4a0139ea201d157b5196.scope.
Dec 13 02:13:19 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:19 np0005558317 podman[74230]: 2025-12-13 07:13:19.248752205 +0000 UTC m=+0.063936226 container init 9f04ed7439dd2b39a95b2f2c5be2cbb1773b35da21cc4a0139ea201d157b5196 (image=quay.io/ceph/ceph:v20, name=sleepy_stonebraker, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True)
Dec 13 02:13:19 np0005558317 podman[74230]: 2025-12-13 07:13:19.252836524 +0000 UTC m=+0.068020535 container start 9f04ed7439dd2b39a95b2f2c5be2cbb1773b35da21cc4a0139ea201d157b5196 (image=quay.io/ceph/ceph:v20, name=sleepy_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:13:19 np0005558317 podman[74230]: 2025-12-13 07:13:19.25381325 +0000 UTC m=+0.068997271 container attach 9f04ed7439dd2b39a95b2f2c5be2cbb1773b35da21cc4a0139ea201d157b5196 (image=quay.io/ceph/ceph:v20, name=sleepy_stonebraker, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:13:19 np0005558317 sleepy_stonebraker[74243]: AQAPEj1phbX6DxAAPe7HcwW+L6g3PIj7SIL+Ag==
Dec 13 02:13:19 np0005558317 systemd[1]: libpod-9f04ed7439dd2b39a95b2f2c5be2cbb1773b35da21cc4a0139ea201d157b5196.scope: Deactivated successfully.
Dec 13 02:13:19 np0005558317 podman[74230]: 2025-12-13 07:13:19.270745183 +0000 UTC m=+0.085929193 container died 9f04ed7439dd2b39a95b2f2c5be2cbb1773b35da21cc4a0139ea201d157b5196 (image=quay.io/ceph/ceph:v20, name=sleepy_stonebraker, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 02:13:19 np0005558317 podman[74230]: 2025-12-13 07:13:19.285119047 +0000 UTC m=+0.100303058 container remove 9f04ed7439dd2b39a95b2f2c5be2cbb1773b35da21cc4a0139ea201d157b5196 (image=quay.io/ceph/ceph:v20, name=sleepy_stonebraker, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:13:19 np0005558317 podman[74230]: 2025-12-13 07:13:19.201953771 +0000 UTC m=+0.017137781 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:19 np0005558317 systemd[1]: libpod-conmon-9f04ed7439dd2b39a95b2f2c5be2cbb1773b35da21cc4a0139ea201d157b5196.scope: Deactivated successfully.
Dec 13 02:13:19 np0005558317 podman[74259]: 2025-12-13 07:13:19.327155879 +0000 UTC m=+0.025906026 container create 8e2665a2599968bfed91303ed96c93c8ada409a2b26f8abcbab4ab57f3e71334 (image=quay.io/ceph/ceph:v20, name=fervent_euclid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:13:19 np0005558317 systemd[1]: Started libpod-conmon-8e2665a2599968bfed91303ed96c93c8ada409a2b26f8abcbab4ab57f3e71334.scope.
Dec 13 02:13:19 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:19 np0005558317 podman[74259]: 2025-12-13 07:13:19.368161222 +0000 UTC m=+0.066911389 container init 8e2665a2599968bfed91303ed96c93c8ada409a2b26f8abcbab4ab57f3e71334 (image=quay.io/ceph/ceph:v20, name=fervent_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 02:13:19 np0005558317 podman[74259]: 2025-12-13 07:13:19.37230402 +0000 UTC m=+0.071054168 container start 8e2665a2599968bfed91303ed96c93c8ada409a2b26f8abcbab4ab57f3e71334 (image=quay.io/ceph/ceph:v20, name=fervent_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:13:19 np0005558317 podman[74259]: 2025-12-13 07:13:19.377404178 +0000 UTC m=+0.076154326 container attach 8e2665a2599968bfed91303ed96c93c8ada409a2b26f8abcbab4ab57f3e71334 (image=quay.io/ceph/ceph:v20, name=fervent_euclid, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:13:19 np0005558317 fervent_euclid[74273]: AQAPEj1p07ghFxAAwTzIQ0nXvXHqoZYms/j+zg==
Dec 13 02:13:19 np0005558317 systemd[1]: libpod-8e2665a2599968bfed91303ed96c93c8ada409a2b26f8abcbab4ab57f3e71334.scope: Deactivated successfully.
Dec 13 02:13:19 np0005558317 conmon[74273]: conmon 8e2665a2599968bfed91 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e2665a2599968bfed91303ed96c93c8ada409a2b26f8abcbab4ab57f3e71334.scope/container/memory.events
Dec 13 02:13:19 np0005558317 podman[74259]: 2025-12-13 07:13:19.316705421 +0000 UTC m=+0.015455568 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:19 np0005558317 podman[74280]: 2025-12-13 07:13:19.420894163 +0000 UTC m=+0.016366441 container died 8e2665a2599968bfed91303ed96c93c8ada409a2b26f8abcbab4ab57f3e71334 (image=quay.io/ceph/ceph:v20, name=fervent_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:13:19 np0005558317 podman[74280]: 2025-12-13 07:13:19.695948734 +0000 UTC m=+0.291421003 container remove 8e2665a2599968bfed91303ed96c93c8ada409a2b26f8abcbab4ab57f3e71334 (image=quay.io/ceph/ceph:v20, name=fervent_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:13:19 np0005558317 systemd[1]: var-lib-containers-storage-overlay-6310d58fa2854cc669f16badea06eb1e982e6e6ca603511916af7c706c4087fe-merged.mount: Deactivated successfully.
Dec 13 02:13:19 np0005558317 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 02:13:19 np0005558317 systemd[1]: libpod-conmon-8e2665a2599968bfed91303ed96c93c8ada409a2b26f8abcbab4ab57f3e71334.scope: Deactivated successfully.
Dec 13 02:13:19 np0005558317 podman[74292]: 2025-12-13 07:13:19.742755024 +0000 UTC m=+0.027614298 container create 5b6f792928ad4c4a8ecafc4d3aebb64227b89fc326c49c49af45f343140f6fbc (image=quay.io/ceph/ceph:v20, name=friendly_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 02:13:19 np0005558317 systemd[1]: Started libpod-conmon-5b6f792928ad4c4a8ecafc4d3aebb64227b89fc326c49c49af45f343140f6fbc.scope.
Dec 13 02:13:19 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:19 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1596962d5b78904796d452d34521e5a0050d6cc353e52c16371e4a815a7ca87/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:19 np0005558317 podman[74292]: 2025-12-13 07:13:19.784605826 +0000 UTC m=+0.069465120 container init 5b6f792928ad4c4a8ecafc4d3aebb64227b89fc326c49c49af45f343140f6fbc (image=quay.io/ceph/ceph:v20, name=friendly_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 02:13:19 np0005558317 podman[74292]: 2025-12-13 07:13:19.788466884 +0000 UTC m=+0.073326158 container start 5b6f792928ad4c4a8ecafc4d3aebb64227b89fc326c49c49af45f343140f6fbc (image=quay.io/ceph/ceph:v20, name=friendly_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 02:13:19 np0005558317 podman[74292]: 2025-12-13 07:13:19.789554289 +0000 UTC m=+0.074413563 container attach 5b6f792928ad4c4a8ecafc4d3aebb64227b89fc326c49c49af45f343140f6fbc (image=quay.io/ceph/ceph:v20, name=friendly_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 02:13:19 np0005558317 friendly_ellis[74305]: /usr/bin/monmaptool: monmap file /tmp/monmap
Dec 13 02:13:19 np0005558317 friendly_ellis[74305]: setting min_mon_release = tentacle
Dec 13 02:13:19 np0005558317 friendly_ellis[74305]: /usr/bin/monmaptool: set fsid to 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 02:13:19 np0005558317 friendly_ellis[74305]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Dec 13 02:13:19 np0005558317 systemd[1]: libpod-5b6f792928ad4c4a8ecafc4d3aebb64227b89fc326c49c49af45f343140f6fbc.scope: Deactivated successfully.
Dec 13 02:13:19 np0005558317 podman[74292]: 2025-12-13 07:13:19.812685177 +0000 UTC m=+0.097544451 container died 5b6f792928ad4c4a8ecafc4d3aebb64227b89fc326c49c49af45f343140f6fbc (image=quay.io/ceph/ceph:v20, name=friendly_ellis, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:13:19 np0005558317 systemd[1]: var-lib-containers-storage-overlay-f1596962d5b78904796d452d34521e5a0050d6cc353e52c16371e4a815a7ca87-merged.mount: Deactivated successfully.
Dec 13 02:13:19 np0005558317 podman[74292]: 2025-12-13 07:13:19.731826507 +0000 UTC m=+0.016685801 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:19 np0005558317 podman[74292]: 2025-12-13 07:13:19.828984139 +0000 UTC m=+0.113843414 container remove 5b6f792928ad4c4a8ecafc4d3aebb64227b89fc326c49c49af45f343140f6fbc (image=quay.io/ceph/ceph:v20, name=friendly_ellis, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 02:13:19 np0005558317 systemd[1]: libpod-conmon-5b6f792928ad4c4a8ecafc4d3aebb64227b89fc326c49c49af45f343140f6fbc.scope: Deactivated successfully.
Dec 13 02:13:19 np0005558317 podman[74323]: 2025-12-13 07:13:19.870787673 +0000 UTC m=+0.026485806 container create b0a0435bdd1f44dc99c22013de3955bec34d9b791d2fc91051e41c9171b2e950 (image=quay.io/ceph/ceph:v20, name=adoring_dhawan, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Dec 13 02:13:19 np0005558317 systemd[1]: Started libpod-conmon-b0a0435bdd1f44dc99c22013de3955bec34d9b791d2fc91051e41c9171b2e950.scope.
Dec 13 02:13:19 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:19 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10c118276240ee5241c8c63028f9ca6118e65c93e4e4183af1c99e4ca7b28e5a/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:19 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10c118276240ee5241c8c63028f9ca6118e65c93e4e4183af1c99e4ca7b28e5a/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:19 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10c118276240ee5241c8c63028f9ca6118e65c93e4e4183af1c99e4ca7b28e5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:19 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10c118276240ee5241c8c63028f9ca6118e65c93e4e4183af1c99e4ca7b28e5a/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:19 np0005558317 podman[74323]: 2025-12-13 07:13:19.917617456 +0000 UTC m=+0.073315600 container init b0a0435bdd1f44dc99c22013de3955bec34d9b791d2fc91051e41c9171b2e950 (image=quay.io/ceph/ceph:v20, name=adoring_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Dec 13 02:13:19 np0005558317 podman[74323]: 2025-12-13 07:13:19.921814136 +0000 UTC m=+0.077512260 container start b0a0435bdd1f44dc99c22013de3955bec34d9b791d2fc91051e41c9171b2e950 (image=quay.io/ceph/ceph:v20, name=adoring_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:13:19 np0005558317 podman[74323]: 2025-12-13 07:13:19.923145269 +0000 UTC m=+0.078843392 container attach b0a0435bdd1f44dc99c22013de3955bec34d9b791d2fc91051e41c9171b2e950 (image=quay.io/ceph/ceph:v20, name=adoring_dhawan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:13:19 np0005558317 podman[74323]: 2025-12-13 07:13:19.860122542 +0000 UTC m=+0.015820675 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:19 np0005558317 systemd[1]: libpod-b0a0435bdd1f44dc99c22013de3955bec34d9b791d2fc91051e41c9171b2e950.scope: Deactivated successfully.
Dec 13 02:13:19 np0005558317 podman[74323]: 2025-12-13 07:13:19.966902736 +0000 UTC m=+0.122600869 container died b0a0435bdd1f44dc99c22013de3955bec34d9b791d2fc91051e41c9171b2e950 (image=quay.io/ceph/ceph:v20, name=adoring_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 02:13:19 np0005558317 podman[74323]: 2025-12-13 07:13:19.985153378 +0000 UTC m=+0.140851501 container remove b0a0435bdd1f44dc99c22013de3955bec34d9b791d2fc91051e41c9171b2e950 (image=quay.io/ceph/ceph:v20, name=adoring_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:13:19 np0005558317 systemd[1]: libpod-conmon-b0a0435bdd1f44dc99c22013de3955bec34d9b791d2fc91051e41c9171b2e950.scope: Deactivated successfully.
Dec 13 02:13:20 np0005558317 systemd[1]: Reloading.
Dec 13 02:13:20 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:13:20 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:13:20 np0005558317 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 02:13:20 np0005558317 systemd[1]: Reloading.
Dec 13 02:13:20 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:13:20 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:13:20 np0005558317 systemd[1]: Reached target All Ceph clusters and services.
Dec 13 02:13:20 np0005558317 systemd[1]: Reloading.
Dec 13 02:13:20 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:13:20 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:13:20 np0005558317 systemd[1]: Reached target Ceph cluster 00fdae1b-7fad-5f1b-8734-ba4d9298a6de.
Dec 13 02:13:20 np0005558317 systemd[1]: Reloading.
Dec 13 02:13:20 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:13:20 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:13:20 np0005558317 systemd[1]: Reloading.
Dec 13 02:13:20 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:13:20 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:13:21 np0005558317 systemd[1]: Created slice Slice /system/ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de.
Dec 13 02:13:21 np0005558317 systemd[1]: Reached target System Time Set.
Dec 13 02:13:21 np0005558317 systemd[1]: Reached target System Time Synchronized.
Dec 13 02:13:21 np0005558317 systemd[1]: Starting Ceph mon.compute-0 for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de...
Dec 13 02:13:21 np0005558317 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 02:13:21 np0005558317 podman[74604]: 2025-12-13 07:13:21.219263729 +0000 UTC m=+0.028942505 container create cf19d3e7e89772e136718bcc204235ac2fd20a43efb8d66db68ecda56cf2d7b7 (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 02:13:21 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00ea2ba308b61d2765f9655418fcf47bafa1ec42002fb2e6727140332a92248d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:21 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00ea2ba308b61d2765f9655418fcf47bafa1ec42002fb2e6727140332a92248d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:21 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00ea2ba308b61d2765f9655418fcf47bafa1ec42002fb2e6727140332a92248d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:21 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00ea2ba308b61d2765f9655418fcf47bafa1ec42002fb2e6727140332a92248d/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:21 np0005558317 podman[74604]: 2025-12-13 07:13:21.258832361 +0000 UTC m=+0.068511157 container init cf19d3e7e89772e136718bcc204235ac2fd20a43efb8d66db68ecda56cf2d7b7 (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 02:13:21 np0005558317 podman[74604]: 2025-12-13 07:13:21.263579425 +0000 UTC m=+0.073258203 container start cf19d3e7e89772e136718bcc204235ac2fd20a43efb8d66db68ecda56cf2d7b7 (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 02:13:21 np0005558317 bash[74604]: cf19d3e7e89772e136718bcc204235ac2fd20a43efb8d66db68ecda56cf2d7b7
Dec 13 02:13:21 np0005558317 podman[74604]: 2025-12-13 07:13:21.2076853 +0000 UTC m=+0.017364097 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:21 np0005558317 systemd[1]: Started Ceph mon.compute-0 for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de.
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: pidfile_write: ignore empty --pid-file
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: load: jerasure load: lrc 
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: RocksDB version: 7.9.2
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: Git sha 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: DB SUMMARY
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: DB Session ID:  GNXSPATIKNS7K26A5HYA
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: CURRENT file:  CURRENT
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                         Options.error_if_exists: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                       Options.create_if_missing: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                                     Options.env: 0x55e1e6c60440
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                                Options.info_log: 0x55e1e7fddd60
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                              Options.statistics: (nil)
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                               Options.use_fsync: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                              Options.db_log_dir: 
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                                 Options.wal_dir: 
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                    Options.write_buffer_manager: 0x55e1e7fe0140
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                  Options.unordered_write: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                               Options.row_cache: None
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                              Options.wal_filter: None
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:             Options.two_write_queues: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:             Options.wal_compression: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:             Options.atomic_flush: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:             Options.max_background_jobs: 2
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:             Options.max_background_compactions: -1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:             Options.max_subcompactions: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:             Options.max_total_wal_size: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                          Options.max_open_files: -1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:       Options.compaction_readahead_size: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: Compression algorithms supported:
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: #011kZSTD supported: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: #011kXpressCompression supported: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: #011kBZip2Compression supported: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: #011kLZ4Compression supported: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: #011kZlibCompression supported: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: #011kLZ4HCCompression supported: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: #011kSnappyCompression supported: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:           Options.merge_operator: 
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:        Options.compaction_filter: None
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e1e7fdccc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e1e7fd38d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:        Options.write_buffer_size: 33554432
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:  Options.max_write_buffer_number: 2
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:          Options.compression: NoCompression
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:             Options.num_levels: 7
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3758b366-8ed2-410f-a091-1c92e1b75bd7
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610001301195, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610001302199, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610001, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "GNXSPATIKNS7K26A5HYA", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610001302293, "job": 1, "event": "recovery_finished"}
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55e1e7ffee00
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: DB pointer 0x55e1e814a000
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.9      0.00              0.00         1    0.001       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.9      0.00              0.00         1    0.001       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.9      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.9      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.25 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.25 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e1e7fd38d0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@-1(???) e0 preinit fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(probing) e0 win_standalone_election
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec 13 02:13:21 np0005558317 podman[74621]: 2025-12-13 07:13:21.314105266 +0000 UTC m=+0.028135116 container create 207d7cd201be0491f2987b475ac026709a7e11643a313799f8a27060fd57aa69 (image=quay.io/ceph/ceph:v20, name=hardcore_jang, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: paxos.0).electionLogic(2) init, last seen epoch 2
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: log_channel(cluster) log [DBG] : fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: log_channel(cluster) log [DBG] : last_changed 2025-12-13T07:13:19.809500+0000
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: log_channel(cluster) log [DBG] : created 2025-12-13T07:13:19.809500+0000
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC 7763 64-Core Processor,created_at=2025-12-13T07:13:19.948653Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:04:00.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025,kernel_version=5.14.0-648.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7865356,os=Linux}
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader).mds e1 new map
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader).mds e1 print_map#012e1#012btime 2025-12-13T07:13:21:319345+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: log_channel(cluster) log [DBG] : fsmap 
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mkfs 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 13 02:13:21 np0005558317 systemd[1]: Started libpod-conmon-207d7cd201be0491f2987b475ac026709a7e11643a313799f8a27060fd57aa69.scope.
Dec 13 02:13:21 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:21 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/664f60b7b9757237563ec36a8e33f29399fd29ab2d3f7319849f1b99e6d0ba28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:21 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/664f60b7b9757237563ec36a8e33f29399fd29ab2d3f7319849f1b99e6d0ba28/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:21 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/664f60b7b9757237563ec36a8e33f29399fd29ab2d3f7319849f1b99e6d0ba28/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:21 np0005558317 podman[74621]: 2025-12-13 07:13:21.370816097 +0000 UTC m=+0.084845946 container init 207d7cd201be0491f2987b475ac026709a7e11643a313799f8a27060fd57aa69 (image=quay.io/ceph/ceph:v20, name=hardcore_jang, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 02:13:21 np0005558317 podman[74621]: 2025-12-13 07:13:21.37556204 +0000 UTC m=+0.089591889 container start 207d7cd201be0491f2987b475ac026709a7e11643a313799f8a27060fd57aa69 (image=quay.io/ceph/ceph:v20, name=hardcore_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 02:13:21 np0005558317 podman[74621]: 2025-12-13 07:13:21.376758479 +0000 UTC m=+0.090788328 container attach 207d7cd201be0491f2987b475ac026709a7e11643a313799f8a27060fd57aa69 (image=quay.io/ceph/ceph:v20, name=hardcore_jang, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:13:21 np0005558317 podman[74621]: 2025-12-13 07:13:21.303897166 +0000 UTC m=+0.017927035 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3560127887' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 13 02:13:21 np0005558317 hardcore_jang[74672]:  cluster:
Dec 13 02:13:21 np0005558317 hardcore_jang[74672]:    id:     00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 02:13:21 np0005558317 hardcore_jang[74672]:    health: HEALTH_OK
Dec 13 02:13:21 np0005558317 hardcore_jang[74672]: 
Dec 13 02:13:21 np0005558317 hardcore_jang[74672]:  services:
Dec 13 02:13:21 np0005558317 hardcore_jang[74672]:    mon: 1 daemons, quorum compute-0 (age 0.206755s) [leader: compute-0]
Dec 13 02:13:21 np0005558317 hardcore_jang[74672]:    mgr: no daemons active
Dec 13 02:13:21 np0005558317 hardcore_jang[74672]:    osd: 0 osds: 0 up, 0 in
Dec 13 02:13:21 np0005558317 hardcore_jang[74672]: 
Dec 13 02:13:21 np0005558317 hardcore_jang[74672]:  data:
Dec 13 02:13:21 np0005558317 hardcore_jang[74672]:    pools:   0 pools, 0 pgs
Dec 13 02:13:21 np0005558317 hardcore_jang[74672]:    objects: 0 objects, 0 B
Dec 13 02:13:21 np0005558317 hardcore_jang[74672]:    usage:   0 B used, 0 B / 0 B avail
Dec 13 02:13:21 np0005558317 hardcore_jang[74672]:    pgs:     
Dec 13 02:13:21 np0005558317 hardcore_jang[74672]: 
Dec 13 02:13:21 np0005558317 systemd[1]: libpod-207d7cd201be0491f2987b475ac026709a7e11643a313799f8a27060fd57aa69.scope: Deactivated successfully.
Dec 13 02:13:21 np0005558317 podman[74698]: 2025-12-13 07:13:21.568643165 +0000 UTC m=+0.018044155 container died 207d7cd201be0491f2987b475ac026709a7e11643a313799f8a27060fd57aa69 (image=quay.io/ceph/ceph:v20, name=hardcore_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 02:13:21 np0005558317 podman[74698]: 2025-12-13 07:13:21.583911471 +0000 UTC m=+0.033312461 container remove 207d7cd201be0491f2987b475ac026709a7e11643a313799f8a27060fd57aa69 (image=quay.io/ceph/ceph:v20, name=hardcore_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:13:21 np0005558317 systemd[1]: libpod-conmon-207d7cd201be0491f2987b475ac026709a7e11643a313799f8a27060fd57aa69.scope: Deactivated successfully.
Dec 13 02:13:21 np0005558317 podman[74711]: 2025-12-13 07:13:21.627903569 +0000 UTC m=+0.026149073 container create b57acc39a6a2351ad7be86559985ca692b0086f943b77417be50296f5826741c (image=quay.io/ceph/ceph:v20, name=quizzical_shannon, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 02:13:21 np0005558317 systemd[1]: Started libpod-conmon-b57acc39a6a2351ad7be86559985ca692b0086f943b77417be50296f5826741c.scope.
Dec 13 02:13:21 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:21 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/072e7c47e148b4ea0f2a661df1575a2e5825b9ed5b041e7b11dc8a20e9ba9dcc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:21 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/072e7c47e148b4ea0f2a661df1575a2e5825b9ed5b041e7b11dc8a20e9ba9dcc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:21 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/072e7c47e148b4ea0f2a661df1575a2e5825b9ed5b041e7b11dc8a20e9ba9dcc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:21 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/072e7c47e148b4ea0f2a661df1575a2e5825b9ed5b041e7b11dc8a20e9ba9dcc/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:21 np0005558317 podman[74711]: 2025-12-13 07:13:21.692299308 +0000 UTC m=+0.090544812 container init b57acc39a6a2351ad7be86559985ca692b0086f943b77417be50296f5826741c (image=quay.io/ceph/ceph:v20, name=quizzical_shannon, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:13:21 np0005558317 podman[74711]: 2025-12-13 07:13:21.697024562 +0000 UTC m=+0.095270056 container start b57acc39a6a2351ad7be86559985ca692b0086f943b77417be50296f5826741c (image=quay.io/ceph/ceph:v20, name=quizzical_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:13:21 np0005558317 podman[74711]: 2025-12-13 07:13:21.69826375 +0000 UTC m=+0.096509265 container attach b57acc39a6a2351ad7be86559985ca692b0086f943b77417be50296f5826741c (image=quay.io/ceph/ceph:v20, name=quizzical_shannon, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:13:21 np0005558317 podman[74711]: 2025-12-13 07:13:21.617958442 +0000 UTC m=+0.016203956 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1660714730' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 13 02:13:21 np0005558317 ceph-mon[74620]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1660714730' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 13 02:13:21 np0005558317 quizzical_shannon[74724]: 
Dec 13 02:13:21 np0005558317 quizzical_shannon[74724]: [global]
Dec 13 02:13:21 np0005558317 quizzical_shannon[74724]: 	fsid = 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 02:13:21 np0005558317 quizzical_shannon[74724]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec 13 02:13:21 np0005558317 quizzical_shannon[74724]: 	osd_crush_chooseleaf_type = 0
Dec 13 02:13:21 np0005558317 systemd[1]: libpod-b57acc39a6a2351ad7be86559985ca692b0086f943b77417be50296f5826741c.scope: Deactivated successfully.
Dec 13 02:13:21 np0005558317 podman[74750]: 2025-12-13 07:13:21.882955824 +0000 UTC m=+0.016315565 container died b57acc39a6a2351ad7be86559985ca692b0086f943b77417be50296f5826741c (image=quay.io/ceph/ceph:v20, name=quizzical_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 02:13:21 np0005558317 systemd[1]: var-lib-containers-storage-overlay-072e7c47e148b4ea0f2a661df1575a2e5825b9ed5b041e7b11dc8a20e9ba9dcc-merged.mount: Deactivated successfully.
Dec 13 02:13:21 np0005558317 podman[74750]: 2025-12-13 07:13:21.901077585 +0000 UTC m=+0.034437315 container remove b57acc39a6a2351ad7be86559985ca692b0086f943b77417be50296f5826741c (image=quay.io/ceph/ceph:v20, name=quizzical_shannon, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:13:21 np0005558317 systemd[1]: libpod-conmon-b57acc39a6a2351ad7be86559985ca692b0086f943b77417be50296f5826741c.scope: Deactivated successfully.
Dec 13 02:13:21 np0005558317 podman[74761]: 2025-12-13 07:13:21.946285328 +0000 UTC m=+0.026601263 container create 1044d67893a00a4bb4c4cc1decc489c0c8238a203bc32bdbfe5e8d7f02a00be0 (image=quay.io/ceph/ceph:v20, name=priceless_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 02:13:21 np0005558317 systemd[1]: Started libpod-conmon-1044d67893a00a4bb4c4cc1decc489c0c8238a203bc32bdbfe5e8d7f02a00be0.scope.
Dec 13 02:13:21 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:21 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ed8e6f918487ab094cf746fa255a2156e4a789bfb755a5931a4a5f4b3a440c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:21 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ed8e6f918487ab094cf746fa255a2156e4a789bfb755a5931a4a5f4b3a440c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:21 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ed8e6f918487ab094cf746fa255a2156e4a789bfb755a5931a4a5f4b3a440c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:21 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ed8e6f918487ab094cf746fa255a2156e4a789bfb755a5931a4a5f4b3a440c/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:21 np0005558317 podman[74761]: 2025-12-13 07:13:21.99116264 +0000 UTC m=+0.071478595 container init 1044d67893a00a4bb4c4cc1decc489c0c8238a203bc32bdbfe5e8d7f02a00be0 (image=quay.io/ceph/ceph:v20, name=priceless_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:13:21 np0005558317 podman[74761]: 2025-12-13 07:13:21.998224758 +0000 UTC m=+0.078540693 container start 1044d67893a00a4bb4c4cc1decc489c0c8238a203bc32bdbfe5e8d7f02a00be0 (image=quay.io/ceph/ceph:v20, name=priceless_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 02:13:21 np0005558317 podman[74761]: 2025-12-13 07:13:21.999731501 +0000 UTC m=+0.080047436 container attach 1044d67893a00a4bb4c4cc1decc489c0c8238a203bc32bdbfe5e8d7f02a00be0 (image=quay.io/ceph/ceph:v20, name=priceless_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 02:13:22 np0005558317 podman[74761]: 2025-12-13 07:13:21.936533874 +0000 UTC m=+0.016849830 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:22 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:13:22 np0005558317 ceph-mon[74620]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/153588289' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:13:22 np0005558317 systemd[1]: libpod-1044d67893a00a4bb4c4cc1decc489c0c8238a203bc32bdbfe5e8d7f02a00be0.scope: Deactivated successfully.
Dec 13 02:13:22 np0005558317 podman[74761]: 2025-12-13 07:13:22.154952275 +0000 UTC m=+0.235268201 container died 1044d67893a00a4bb4c4cc1decc489c0c8238a203bc32bdbfe5e8d7f02a00be0 (image=quay.io/ceph/ceph:v20, name=priceless_ptolemy, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:13:22 np0005558317 systemd[1]: var-lib-containers-storage-overlay-d9ed8e6f918487ab094cf746fa255a2156e4a789bfb755a5931a4a5f4b3a440c-merged.mount: Deactivated successfully.
Dec 13 02:13:22 np0005558317 podman[74761]: 2025-12-13 07:13:22.17715061 +0000 UTC m=+0.257466546 container remove 1044d67893a00a4bb4c4cc1decc489c0c8238a203bc32bdbfe5e8d7f02a00be0 (image=quay.io/ceph/ceph:v20, name=priceless_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 02:13:22 np0005558317 systemd[1]: libpod-conmon-1044d67893a00a4bb4c4cc1decc489c0c8238a203bc32bdbfe5e8d7f02a00be0.scope: Deactivated successfully.
Dec 13 02:13:22 np0005558317 systemd[1]: Stopping Ceph mon.compute-0 for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de...
Dec 13 02:13:22 np0005558317 ceph-mon[74620]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 13 02:13:22 np0005558317 ceph-mon[74620]: mon.compute-0@0(leader) e1 shutdown
Dec 13 02:13:22 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0[74616]: 2025-12-13T07:13:22.298+0000 7fb76168f640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 13 02:13:22 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0[74616]: 2025-12-13T07:13:22.298+0000 7fb76168f640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 13 02:13:22 np0005558317 ceph-mon[74620]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 13 02:13:22 np0005558317 ceph-mon[74620]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 13 02:13:22 np0005558317 podman[74833]: 2025-12-13 07:13:22.35133029 +0000 UTC m=+0.071721423 container died cf19d3e7e89772e136718bcc204235ac2fd20a43efb8d66db68ecda56cf2d7b7 (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 02:13:22 np0005558317 systemd[1]: var-lib-containers-storage-overlay-00ea2ba308b61d2765f9655418fcf47bafa1ec42002fb2e6727140332a92248d-merged.mount: Deactivated successfully.
Dec 13 02:13:22 np0005558317 podman[74833]: 2025-12-13 07:13:22.367197802 +0000 UTC m=+0.087588934 container remove cf19d3e7e89772e136718bcc204235ac2fd20a43efb8d66db68ecda56cf2d7b7 (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 02:13:22 np0005558317 bash[74833]: ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0
Dec 13 02:13:22 np0005558317 systemd[1]: ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de@mon.compute-0.service: Deactivated successfully.
Dec 13 02:13:22 np0005558317 systemd[1]: Stopped Ceph mon.compute-0 for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de.
Dec 13 02:13:22 np0005558317 systemd[1]: Starting Ceph mon.compute-0 for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de...
Dec 13 02:13:22 np0005558317 podman[74912]: 2025-12-13 07:13:22.591811841 +0000 UTC m=+0.025433488 container create 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:13:22 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de23cd3ce62c8812bfbedb19351e447a14c16ae4f654415c23d9eadaf14158e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:22 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de23cd3ce62c8812bfbedb19351e447a14c16ae4f654415c23d9eadaf14158e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:22 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de23cd3ce62c8812bfbedb19351e447a14c16ae4f654415c23d9eadaf14158e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:22 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de23cd3ce62c8812bfbedb19351e447a14c16ae4f654415c23d9eadaf14158e/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:22 np0005558317 podman[74912]: 2025-12-13 07:13:22.634746391 +0000 UTC m=+0.068368048 container init 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 02:13:22 np0005558317 podman[74912]: 2025-12-13 07:13:22.639197369 +0000 UTC m=+0.072819016 container start 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:13:22 np0005558317 bash[74912]: 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a
Dec 13 02:13:22 np0005558317 podman[74912]: 2025-12-13 07:13:22.580805759 +0000 UTC m=+0.014427416 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:22 np0005558317 systemd[1]: Started Ceph mon.compute-0 for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de.
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: pidfile_write: ignore empty --pid-file
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: load: jerasure load: lrc 
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: RocksDB version: 7.9.2
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: Git sha 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: DB SUMMARY
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: DB Session ID:  1EYF1QT48HSM3ZBGDMBQ
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: CURRENT file:  CURRENT
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 48303 ; 
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                         Options.error_if_exists: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                       Options.create_if_missing: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                                     Options.env: 0x5642b92b0440
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                                Options.info_log: 0x5642ba2a2000
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                              Options.statistics: (nil)
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                               Options.use_fsync: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                              Options.db_log_dir: 
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                                 Options.wal_dir: 
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                    Options.write_buffer_manager: 0x5642ba296140
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                  Options.unordered_write: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                               Options.row_cache: None
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                              Options.wal_filter: None
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:             Options.two_write_queues: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:             Options.wal_compression: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:             Options.atomic_flush: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:             Options.max_background_jobs: 2
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:             Options.max_background_compactions: -1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:             Options.max_subcompactions: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:             Options.max_total_wal_size: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                          Options.max_open_files: -1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:       Options.compaction_readahead_size: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: Compression algorithms supported:
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: 	kZSTD supported: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: 	kXpressCompression supported: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: 	kBZip2Compression supported: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: 	kLZ4Compression supported: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: 	kZlibCompression supported: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: 	kLZ4HCCompression supported: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: 	kSnappyCompression supported: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:           Options.merge_operator: 
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:        Options.compaction_filter: None
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5642ba293a80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5642ba289a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:        Options.write_buffer_size: 33554432
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:  Options.max_write_buffer_number: 2
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:          Options.compression: NoCompression
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:             Options.num_levels: 7
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3758b366-8ed2-410f-a091-1c92e1b75bd7
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610002671164, "job": 1, "event": "recovery_started", "wal_files": [9]}
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610002672467, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 48181, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 123, "table_properties": {"data_size": 46755, "index_size": 132, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2974, "raw_average_key_size": 31, "raw_value_size": 44371, "raw_average_value_size": 472, "num_data_blocks": 7, "num_entries": 94, "num_filter_entries": 94, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610002, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610002672563, "job": 1, "event": "recovery_finished"}
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5642ba2b4e00
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: DB pointer 0x5642ba406000
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   48.95 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     42.7      0.00              0.00         1    0.001       0      0       0.0       0.0#012 Sum      2/0   48.95 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     42.7      0.00              0.00         1    0.001       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     42.7      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     42.7      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 8.64 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 8.64 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5642ba289a30#2 capacity: 512.00 MB usage: 0.75 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.33 KB,6.25849e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: mon.compute-0@-1(???) e1 preinit fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: mon.compute-0@-1(???).mds e1 new map
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: mon.compute-0@-1(???).mds e1 print_map#012e1#012btime 2025-12-13T07:13:21:319345+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : last_changed 2025-12-13T07:13:19.809500+0000
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : created 2025-12-13T07:13:19.809500+0000
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : fsmap 
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec 13 02:13:22 np0005558317 podman[74929]: 2025-12-13 07:13:22.685495322 +0000 UTC m=+0.027740615 container create 8e9101a52e46eb88d40b0cf74ce8b6f73510841513095c2f339f715921c2d647 (image=quay.io/ceph/ceph:v20, name=affectionate_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:13:22 np0005558317 systemd[1]: Started libpod-conmon-8e9101a52e46eb88d40b0cf74ce8b6f73510841513095c2f339f715921c2d647.scope.
Dec 13 02:13:22 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:22 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad06c9ead092b5c47c357fbd10a498694e68aeac8c0f74a34de693527fe47764/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 13 02:13:22 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad06c9ead092b5c47c357fbd10a498694e68aeac8c0f74a34de693527fe47764/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:22 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad06c9ead092b5c47c357fbd10a498694e68aeac8c0f74a34de693527fe47764/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:22 np0005558317 podman[74929]: 2025-12-13 07:13:22.747513541 +0000 UTC m=+0.089758833 container init 8e9101a52e46eb88d40b0cf74ce8b6f73510841513095c2f339f715921c2d647 (image=quay.io/ceph/ceph:v20, name=affectionate_shannon, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 02:13:22 np0005558317 podman[74929]: 2025-12-13 07:13:22.75284786 +0000 UTC m=+0.095093153 container start 8e9101a52e46eb88d40b0cf74ce8b6f73510841513095c2f339f715921c2d647 (image=quay.io/ceph/ceph:v20, name=affectionate_shannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle)
Dec 13 02:13:22 np0005558317 podman[74929]: 2025-12-13 07:13:22.754040221 +0000 UTC m=+0.096285514 container attach 8e9101a52e46eb88d40b0cf74ce8b6f73510841513095c2f339f715921c2d647 (image=quay.io/ceph/ceph:v20, name=affectionate_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:13:22 np0005558317 podman[74929]: 2025-12-13 07:13:22.675425731 +0000 UTC m=+0.017671044 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Dec 13 02:13:22 np0005558317 systemd[1]: libpod-8e9101a52e46eb88d40b0cf74ce8b6f73510841513095c2f339f715921c2d647.scope: Deactivated successfully.
Dec 13 02:13:22 np0005558317 conmon[74980]: conmon 8e9101a52e46eb88d40b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e9101a52e46eb88d40b0cf74ce8b6f73510841513095c2f339f715921c2d647.scope/container/memory.events
Dec 13 02:13:22 np0005558317 podman[75006]: 2025-12-13 07:13:22.93955994 +0000 UTC m=+0.015565705 container died 8e9101a52e46eb88d40b0cf74ce8b6f73510841513095c2f339f715921c2d647 (image=quay.io/ceph/ceph:v20, name=affectionate_shannon, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 02:13:22 np0005558317 systemd[1]: var-lib-containers-storage-overlay-ad06c9ead092b5c47c357fbd10a498694e68aeac8c0f74a34de693527fe47764-merged.mount: Deactivated successfully.
Dec 13 02:13:22 np0005558317 podman[75006]: 2025-12-13 07:13:22.955252544 +0000 UTC m=+0.031258289 container remove 8e9101a52e46eb88d40b0cf74ce8b6f73510841513095c2f339f715921c2d647 (image=quay.io/ceph/ceph:v20, name=affectionate_shannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:13:22 np0005558317 systemd[1]: libpod-conmon-8e9101a52e46eb88d40b0cf74ce8b6f73510841513095c2f339f715921c2d647.scope: Deactivated successfully.
Dec 13 02:13:22 np0005558317 podman[75017]: 2025-12-13 07:13:22.996790759 +0000 UTC m=+0.024430843 container create 111135779fa8d2a5a5891cb252129bf40185748571adc3e93f282101d91b4d20 (image=quay.io/ceph/ceph:v20, name=zen_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 02:13:23 np0005558317 systemd[1]: Started libpod-conmon-111135779fa8d2a5a5891cb252129bf40185748571adc3e93f282101d91b4d20.scope.
Dec 13 02:13:23 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:23 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2c0eec1db992a251119fa9c29f35c62f8e3e0f17a50839b30e7eb548846f306/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:23 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2c0eec1db992a251119fa9c29f35c62f8e3e0f17a50839b30e7eb548846f306/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:23 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2c0eec1db992a251119fa9c29f35c62f8e3e0f17a50839b30e7eb548846f306/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:23 np0005558317 podman[75017]: 2025-12-13 07:13:23.046964799 +0000 UTC m=+0.074604873 container init 111135779fa8d2a5a5891cb252129bf40185748571adc3e93f282101d91b4d20 (image=quay.io/ceph/ceph:v20, name=zen_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:13:23 np0005558317 podman[75017]: 2025-12-13 07:13:23.05114594 +0000 UTC m=+0.078786014 container start 111135779fa8d2a5a5891cb252129bf40185748571adc3e93f282101d91b4d20 (image=quay.io/ceph/ceph:v20, name=zen_einstein, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 02:13:23 np0005558317 podman[75017]: 2025-12-13 07:13:23.052277728 +0000 UTC m=+0.079917802 container attach 111135779fa8d2a5a5891cb252129bf40185748571adc3e93f282101d91b4d20 (image=quay.io/ceph/ceph:v20, name=zen_einstein, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 02:13:23 np0005558317 podman[75017]: 2025-12-13 07:13:22.987106491 +0000 UTC m=+0.014746575 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:23 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Dec 13 02:13:23 np0005558317 systemd[1]: libpod-111135779fa8d2a5a5891cb252129bf40185748571adc3e93f282101d91b4d20.scope: Deactivated successfully.
Dec 13 02:13:23 np0005558317 podman[75017]: 2025-12-13 07:13:23.203381844 +0000 UTC m=+0.231021918 container died 111135779fa8d2a5a5891cb252129bf40185748571adc3e93f282101d91b4d20 (image=quay.io/ceph/ceph:v20, name=zen_einstein, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:13:23 np0005558317 systemd[1]: var-lib-containers-storage-overlay-b2c0eec1db992a251119fa9c29f35c62f8e3e0f17a50839b30e7eb548846f306-merged.mount: Deactivated successfully.
Dec 13 02:13:23 np0005558317 podman[75017]: 2025-12-13 07:13:23.220914266 +0000 UTC m=+0.248554330 container remove 111135779fa8d2a5a5891cb252129bf40185748571adc3e93f282101d91b4d20 (image=quay.io/ceph/ceph:v20, name=zen_einstein, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 02:13:23 np0005558317 systemd[1]: libpod-conmon-111135779fa8d2a5a5891cb252129bf40185748571adc3e93f282101d91b4d20.scope: Deactivated successfully.
Dec 13 02:13:23 np0005558317 systemd[1]: Reloading.
Dec 13 02:13:23 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:13:23 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:13:23 np0005558317 systemd[1]: Reloading.
Dec 13 02:13:23 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:13:23 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:13:23 np0005558317 systemd[1]: Starting Ceph mgr.compute-0.qsherl for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de...
Dec 13 02:13:23 np0005558317 podman[75184]: 2025-12-13 07:13:23.767603699 +0000 UTC m=+0.026056689 container create 4d78867918d5dd4dba36f3a6dc6db4122866221ae6fbf48a37819c5ae84e8283 (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-qsherl, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 02:13:23 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/346fc788cab02aea4507e4bda75119dcdb6967076b2f735cd53af7434813aca9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:23 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/346fc788cab02aea4507e4bda75119dcdb6967076b2f735cd53af7434813aca9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:23 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/346fc788cab02aea4507e4bda75119dcdb6967076b2f735cd53af7434813aca9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:23 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/346fc788cab02aea4507e4bda75119dcdb6967076b2f735cd53af7434813aca9/merged/var/lib/ceph/mgr/ceph-compute-0.qsherl supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:23 np0005558317 podman[75184]: 2025-12-13 07:13:23.806732413 +0000 UTC m=+0.065185413 container init 4d78867918d5dd4dba36f3a6dc6db4122866221ae6fbf48a37819c5ae84e8283 (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-qsherl, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:13:23 np0005558317 podman[75184]: 2025-12-13 07:13:23.811110995 +0000 UTC m=+0.069563985 container start 4d78867918d5dd4dba36f3a6dc6db4122866221ae6fbf48a37819c5ae84e8283 (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-qsherl, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True)
Dec 13 02:13:23 np0005558317 bash[75184]: 4d78867918d5dd4dba36f3a6dc6db4122866221ae6fbf48a37819c5ae84e8283
Dec 13 02:13:23 np0005558317 podman[75184]: 2025-12-13 07:13:23.756728462 +0000 UTC m=+0.015181463 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:23 np0005558317 systemd[1]: Started Ceph mgr.compute-0.qsherl for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de.
Dec 13 02:13:23 np0005558317 ceph-mgr[75200]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 02:13:23 np0005558317 ceph-mgr[75200]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Dec 13 02:13:23 np0005558317 ceph-mgr[75200]: pidfile_write: ignore empty --pid-file
Dec 13 02:13:23 np0005558317 podman[75201]: 2025-12-13 07:13:23.8611115 +0000 UTC m=+0.028950380 container create 9a02dd76d1d173632c97c7c720ca7bebed44ec2a5d63b0d230e40b2dba92d56e (image=quay.io/ceph/ceph:v20, name=quizzical_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 02:13:23 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'alerts'
Dec 13 02:13:23 np0005558317 systemd[1]: Started libpod-conmon-9a02dd76d1d173632c97c7c720ca7bebed44ec2a5d63b0d230e40b2dba92d56e.scope.
Dec 13 02:13:23 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:23 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d609d25c4037f130793e222aa764b0d016d848c851a41fcedab4a7141a583e9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:23 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d609d25c4037f130793e222aa764b0d016d848c851a41fcedab4a7141a583e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:23 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d609d25c4037f130793e222aa764b0d016d848c851a41fcedab4a7141a583e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:23 np0005558317 podman[75201]: 2025-12-13 07:13:23.920912039 +0000 UTC m=+0.088750929 container init 9a02dd76d1d173632c97c7c720ca7bebed44ec2a5d63b0d230e40b2dba92d56e (image=quay.io/ceph/ceph:v20, name=quizzical_joliot, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 02:13:23 np0005558317 podman[75201]: 2025-12-13 07:13:23.926493712 +0000 UTC m=+0.094332593 container start 9a02dd76d1d173632c97c7c720ca7bebed44ec2a5d63b0d230e40b2dba92d56e (image=quay.io/ceph/ceph:v20, name=quizzical_joliot, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 02:13:23 np0005558317 podman[75201]: 2025-12-13 07:13:23.928280762 +0000 UTC m=+0.096119662 container attach 9a02dd76d1d173632c97c7c720ca7bebed44ec2a5d63b0d230e40b2dba92d56e (image=quay.io/ceph/ceph:v20, name=quizzical_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 02:13:23 np0005558317 podman[75201]: 2025-12-13 07:13:23.850056445 +0000 UTC m=+0.017895335 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:23 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'balancer'
Dec 13 02:13:24 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'cephadm'
Dec 13 02:13:24 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 13 02:13:24 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3310964443' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]: 
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]: {
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:    "fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:    "health": {
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "status": "HEALTH_OK",
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "checks": {},
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "mutes": []
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:    },
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:    "election_epoch": 5,
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:    "quorum": [
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        0
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:    ],
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:    "quorum_names": [
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "compute-0"
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:    ],
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:    "quorum_age": 1,
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:    "monmap": {
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "epoch": 1,
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "min_mon_release_name": "tentacle",
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "num_mons": 1
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:    },
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:    "osdmap": {
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "epoch": 1,
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "num_osds": 0,
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "num_up_osds": 0,
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "osd_up_since": 0,
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "num_in_osds": 0,
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "osd_in_since": 0,
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "num_remapped_pgs": 0
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:    },
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:    "pgmap": {
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "pgs_by_state": [],
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "num_pgs": 0,
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "num_pools": 0,
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "num_objects": 0,
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "data_bytes": 0,
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "bytes_used": 0,
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "bytes_avail": 0,
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "bytes_total": 0
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:    },
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:    "fsmap": {
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "epoch": 1,
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "btime": "2025-12-13T07:13:21:319345+0000",
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "by_rank": [],
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "up:standby": 0
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:    },
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:    "mgrmap": {
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "available": false,
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "num_standbys": 0,
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "modules": [
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:            "iostat",
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:            "nfs"
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        ],
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "services": {}
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:    },
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:    "servicemap": {
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "epoch": 1,
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "modified": "2025-12-13T07:13:21.320643+0000",
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:        "services": {}
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:    },
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]:    "progress_events": {}
Dec 13 02:13:24 np0005558317 quizzical_joliot[75234]: }
Dec 13 02:13:24 np0005558317 systemd[1]: libpod-9a02dd76d1d173632c97c7c720ca7bebed44ec2a5d63b0d230e40b2dba92d56e.scope: Deactivated successfully.
Dec 13 02:13:24 np0005558317 podman[75201]: 2025-12-13 07:13:24.089355785 +0000 UTC m=+0.257194665 container died 9a02dd76d1d173632c97c7c720ca7bebed44ec2a5d63b0d230e40b2dba92d56e (image=quay.io/ceph/ceph:v20, name=quizzical_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030)
Dec 13 02:13:24 np0005558317 systemd[1]: var-lib-containers-storage-overlay-0d609d25c4037f130793e222aa764b0d016d848c851a41fcedab4a7141a583e9-merged.mount: Deactivated successfully.
Dec 13 02:13:24 np0005558317 podman[75201]: 2025-12-13 07:13:24.123822975 +0000 UTC m=+0.291661854 container remove 9a02dd76d1d173632c97c7c720ca7bebed44ec2a5d63b0d230e40b2dba92d56e (image=quay.io/ceph/ceph:v20, name=quizzical_joliot, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 02:13:24 np0005558317 systemd[1]: libpod-conmon-9a02dd76d1d173632c97c7c720ca7bebed44ec2a5d63b0d230e40b2dba92d56e.scope: Deactivated successfully.
Dec 13 02:13:24 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'crash'
Dec 13 02:13:24 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'dashboard'
Dec 13 02:13:25 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'devicehealth'
Dec 13 02:13:25 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'diskprediction_local'
Dec 13 02:13:25 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-qsherl[75196]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 13 02:13:25 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-qsherl[75196]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 13 02:13:25 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-qsherl[75196]:  from numpy import show_config as show_numpy_config
Dec 13 02:13:25 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'influx'
Dec 13 02:13:25 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'insights'
Dec 13 02:13:25 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'iostat'
Dec 13 02:13:25 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'k8sevents'
Dec 13 02:13:26 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'localpool'
Dec 13 02:13:26 np0005558317 podman[75280]: 2025-12-13 07:13:26.166920586 +0000 UTC m=+0.026940862 container create d2c1cb2be971b812a45f5a2eb25b5cb98ec7d10becff10688f79f22c479546b0 (image=quay.io/ceph/ceph:v20, name=ecstatic_mendel, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:13:26 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'mds_autoscaler'
Dec 13 02:13:26 np0005558317 systemd[1]: Started libpod-conmon-d2c1cb2be971b812a45f5a2eb25b5cb98ec7d10becff10688f79f22c479546b0.scope.
Dec 13 02:13:26 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:26 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58bdb0b39b4002c8f374d231d46267aa5ba90adaa9e2c5dc90178eab7d1e6857/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:26 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58bdb0b39b4002c8f374d231d46267aa5ba90adaa9e2c5dc90178eab7d1e6857/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:26 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58bdb0b39b4002c8f374d231d46267aa5ba90adaa9e2c5dc90178eab7d1e6857/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:26 np0005558317 podman[75280]: 2025-12-13 07:13:26.223948392 +0000 UTC m=+0.083968687 container init d2c1cb2be971b812a45f5a2eb25b5cb98ec7d10becff10688f79f22c479546b0 (image=quay.io/ceph/ceph:v20, name=ecstatic_mendel, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 02:13:26 np0005558317 podman[75280]: 2025-12-13 07:13:26.228185147 +0000 UTC m=+0.088205423 container start d2c1cb2be971b812a45f5a2eb25b5cb98ec7d10becff10688f79f22c479546b0 (image=quay.io/ceph/ceph:v20, name=ecstatic_mendel, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:13:26 np0005558317 podman[75280]: 2025-12-13 07:13:26.230543641 +0000 UTC m=+0.090563917 container attach d2c1cb2be971b812a45f5a2eb25b5cb98ec7d10becff10688f79f22c479546b0 (image=quay.io/ceph/ceph:v20, name=ecstatic_mendel, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 02:13:26 np0005558317 podman[75280]: 2025-12-13 07:13:26.155887112 +0000 UTC m=+0.015907407 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:26 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 13 02:13:26 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3693822294' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]: 
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]: {
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:    "fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:    "health": {
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "status": "HEALTH_OK",
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "checks": {},
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "mutes": []
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:    },
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:    "election_epoch": 5,
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:    "quorum": [
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        0
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:    ],
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:    "quorum_names": [
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "compute-0"
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:    ],
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:    "quorum_age": 3,
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:    "monmap": {
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "epoch": 1,
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "min_mon_release_name": "tentacle",
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "num_mons": 1
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:    },
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:    "osdmap": {
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "epoch": 1,
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "num_osds": 0,
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "num_up_osds": 0,
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "osd_up_since": 0,
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "num_in_osds": 0,
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "osd_in_since": 0,
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "num_remapped_pgs": 0
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:    },
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:    "pgmap": {
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "pgs_by_state": [],
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "num_pgs": 0,
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "num_pools": 0,
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "num_objects": 0,
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "data_bytes": 0,
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "bytes_used": 0,
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "bytes_avail": 0,
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "bytes_total": 0
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:    },
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:    "fsmap": {
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "epoch": 1,
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "btime": "2025-12-13T07:13:21:319345+0000",
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "by_rank": [],
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "up:standby": 0
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:    },
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:    "mgrmap": {
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "available": false,
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "num_standbys": 0,
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "modules": [
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:            "iostat",
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:            "nfs"
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        ],
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "services": {}
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:    },
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:    "servicemap": {
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "epoch": 1,
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "modified": "2025-12-13T07:13:21.320643+0000",
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:        "services": {}
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:    },
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]:    "progress_events": {}
Dec 13 02:13:26 np0005558317 ecstatic_mendel[75293]: }
Dec 13 02:13:26 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'mirroring'
Dec 13 02:13:26 np0005558317 systemd[1]: libpod-d2c1cb2be971b812a45f5a2eb25b5cb98ec7d10becff10688f79f22c479546b0.scope: Deactivated successfully.
Dec 13 02:13:26 np0005558317 conmon[75293]: conmon d2c1cb2be971b812a45f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d2c1cb2be971b812a45f5a2eb25b5cb98ec7d10becff10688f79f22c479546b0.scope/container/memory.events
Dec 13 02:13:26 np0005558317 podman[75280]: 2025-12-13 07:13:26.38864384 +0000 UTC m=+0.248664126 container died d2c1cb2be971b812a45f5a2eb25b5cb98ec7d10becff10688f79f22c479546b0 (image=quay.io/ceph/ceph:v20, name=ecstatic_mendel, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:13:26 np0005558317 systemd[1]: var-lib-containers-storage-overlay-58bdb0b39b4002c8f374d231d46267aa5ba90adaa9e2c5dc90178eab7d1e6857-merged.mount: Deactivated successfully.
Dec 13 02:13:26 np0005558317 podman[75280]: 2025-12-13 07:13:26.413208122 +0000 UTC m=+0.273228398 container remove d2c1cb2be971b812a45f5a2eb25b5cb98ec7d10becff10688f79f22c479546b0 (image=quay.io/ceph/ceph:v20, name=ecstatic_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 02:13:26 np0005558317 systemd[1]: libpod-conmon-d2c1cb2be971b812a45f5a2eb25b5cb98ec7d10becff10688f79f22c479546b0.scope: Deactivated successfully.
Dec 13 02:13:26 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'nfs'
Dec 13 02:13:26 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'orchestrator'
Dec 13 02:13:26 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'osd_perf_query'
Dec 13 02:13:26 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'osd_support'
Dec 13 02:13:27 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'pg_autoscaler'
Dec 13 02:13:27 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'progress'
Dec 13 02:13:27 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'prometheus'
Dec 13 02:13:27 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'rbd_support'
Dec 13 02:13:27 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'rgw'
Dec 13 02:13:27 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'rook'
Dec 13 02:13:28 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'selftest'
Dec 13 02:13:28 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'smb'
Dec 13 02:13:28 np0005558317 podman[75330]: 2025-12-13 07:13:28.453961726 +0000 UTC m=+0.024374607 container create 0c4eb6ba816b0ac6c0a6abe5f734bb712d08402fe900dc31507d44ff44ea486b (image=quay.io/ceph/ceph:v20, name=blissful_engelbart, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 02:13:28 np0005558317 systemd[1]: Started libpod-conmon-0c4eb6ba816b0ac6c0a6abe5f734bb712d08402fe900dc31507d44ff44ea486b.scope.
Dec 13 02:13:28 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:28 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4b1b4ccf953389f78d1c457320fd2224512e7a3c5bcc91f1b9960a2a279941/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:28 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4b1b4ccf953389f78d1c457320fd2224512e7a3c5bcc91f1b9960a2a279941/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:28 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4b1b4ccf953389f78d1c457320fd2224512e7a3c5bcc91f1b9960a2a279941/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:28 np0005558317 podman[75330]: 2025-12-13 07:13:28.513207302 +0000 UTC m=+0.083620173 container init 0c4eb6ba816b0ac6c0a6abe5f734bb712d08402fe900dc31507d44ff44ea486b (image=quay.io/ceph/ceph:v20, name=blissful_engelbart, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 02:13:28 np0005558317 podman[75330]: 2025-12-13 07:13:28.517178378 +0000 UTC m=+0.087591248 container start 0c4eb6ba816b0ac6c0a6abe5f734bb712d08402fe900dc31507d44ff44ea486b (image=quay.io/ceph/ceph:v20, name=blissful_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:13:28 np0005558317 podman[75330]: 2025-12-13 07:13:28.518206331 +0000 UTC m=+0.088619202 container attach 0c4eb6ba816b0ac6c0a6abe5f734bb712d08402fe900dc31507d44ff44ea486b (image=quay.io/ceph/ceph:v20, name=blissful_engelbart, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 02:13:28 np0005558317 podman[75330]: 2025-12-13 07:13:28.444764244 +0000 UTC m=+0.015177135 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:28 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'snap_schedule'
Dec 13 02:13:28 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 13 02:13:28 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3952436456' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]: 
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]: {
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:    "fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:    "health": {
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "status": "HEALTH_OK",
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "checks": {},
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "mutes": []
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:    },
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:    "election_epoch": 5,
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:    "quorum": [
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        0
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:    ],
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:    "quorum_names": [
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "compute-0"
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:    ],
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:    "quorum_age": 5,
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:    "monmap": {
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "epoch": 1,
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "min_mon_release_name": "tentacle",
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "num_mons": 1
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:    },
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:    "osdmap": {
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "epoch": 1,
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "num_osds": 0,
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "num_up_osds": 0,
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "osd_up_since": 0,
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "num_in_osds": 0,
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "osd_in_since": 0,
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "num_remapped_pgs": 0
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:    },
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:    "pgmap": {
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "pgs_by_state": [],
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "num_pgs": 0,
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "num_pools": 0,
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "num_objects": 0,
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "data_bytes": 0,
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "bytes_used": 0,
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "bytes_avail": 0,
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "bytes_total": 0
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:    },
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:    "fsmap": {
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "epoch": 1,
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "btime": "2025-12-13T07:13:21:319345+0000",
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "by_rank": [],
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "up:standby": 0
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:    },
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:    "mgrmap": {
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "available": false,
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "num_standbys": 0,
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "modules": [
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:            "iostat",
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:            "nfs"
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        ],
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "services": {}
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:    },
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:    "servicemap": {
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "epoch": 1,
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "modified": "2025-12-13T07:13:21.320643+0000",
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:        "services": {}
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:    },
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]:    "progress_events": {}
Dec 13 02:13:28 np0005558317 blissful_engelbart[75343]: }
Dec 13 02:13:28 np0005558317 systemd[1]: libpod-0c4eb6ba816b0ac6c0a6abe5f734bb712d08402fe900dc31507d44ff44ea486b.scope: Deactivated successfully.
Dec 13 02:13:28 np0005558317 podman[75330]: 2025-12-13 07:13:28.670251916 +0000 UTC m=+0.240664787 container died 0c4eb6ba816b0ac6c0a6abe5f734bb712d08402fe900dc31507d44ff44ea486b (image=quay.io/ceph/ceph:v20, name=blissful_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 02:13:28 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'stats'
Dec 13 02:13:28 np0005558317 systemd[1]: var-lib-containers-storage-overlay-fd4b1b4ccf953389f78d1c457320fd2224512e7a3c5bcc91f1b9960a2a279941-merged.mount: Deactivated successfully.
Dec 13 02:13:28 np0005558317 podman[75330]: 2025-12-13 07:13:28.68854635 +0000 UTC m=+0.258959222 container remove 0c4eb6ba816b0ac6c0a6abe5f734bb712d08402fe900dc31507d44ff44ea486b (image=quay.io/ceph/ceph:v20, name=blissful_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 02:13:28 np0005558317 systemd[1]: libpod-conmon-0c4eb6ba816b0ac6c0a6abe5f734bb712d08402fe900dc31507d44ff44ea486b.scope: Deactivated successfully.
Dec 13 02:13:28 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'status'
Dec 13 02:13:28 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'telegraf'
Dec 13 02:13:28 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'telemetry'
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'test_orchestrator'
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'volumes'
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: ms_deliver_dispatch: unhandled message 0x5571dc4d9860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.qsherl
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: mgr handle_mgr_map Activating!
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: mgr handle_mgr_map I am now activating
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.qsherl(active, starting, since 0.00483231s)
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' cmd={"prefix": "mds metadata"} : dispatch
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).mds e1 all = 1
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata"} : dispatch
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' cmd={"prefix": "mon metadata"} : dispatch
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.qsherl", "id": "compute-0.qsherl"} v 0)
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' cmd={"prefix": "mgr metadata", "who": "compute-0.qsherl", "id": "compute-0.qsherl"} : dispatch
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: log_channel(cluster) log [INF] : Manager daemon compute-0.qsherl is now available
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: mgr load Constructed class from module: balancer
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [balancer INFO root] Starting
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: mgr load Constructed class from module: crash
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:13:29
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [balancer INFO root] No pools available
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: mgr load Constructed class from module: devicehealth
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [devicehealth INFO root] Starting
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: mgr load Constructed class from module: iostat
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: mgr load Constructed class from module: nfs
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: mgr load Constructed class from module: orchestrator
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: mgr load Constructed class from module: pg_autoscaler
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: mgr load Constructed class from module: progress
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [progress INFO root] Loading...
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [progress INFO root] No stored events to load
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [progress INFO root] Loaded [] historic events
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [progress INFO root] Loaded OSDMap, ready.
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] recovery thread starting
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] starting setup
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: mgr load Constructed class from module: rbd_support
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: mgr load Constructed class from module: status
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qsherl/mirror_snapshot_schedule"} v 0)
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qsherl/mirror_snapshot_schedule"} : dispatch
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: mgr load Constructed class from module: telemetry
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] PerfHandler: starting
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] TaskHandler: starting
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qsherl/trash_purge_schedule"} v 0)
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qsherl/trash_purge_schedule"} : dispatch
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] setup complete
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:29 np0005558317 ceph-mgr[75200]: mgr load Constructed class from module: volumes
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: Activating manager daemon compute-0.qsherl
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: Manager daemon compute-0.qsherl is now available
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qsherl/mirror_snapshot_schedule"} : dispatch
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qsherl/trash_purge_schedule"} : dispatch
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:29 np0005558317 ceph-mon[74928]: from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:30 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.qsherl(active, since 1.00935s)
Dec 13 02:13:30 np0005558317 podman[75457]: 2025-12-13 07:13:30.734518707 +0000 UTC m=+0.028824262 container create 62ddf3c3ca79cb43a49efbf89a7992724d93a071168c5defde18cd99a1299453 (image=quay.io/ceph/ceph:v20, name=busy_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 02:13:30 np0005558317 systemd[1]: Started libpod-conmon-62ddf3c3ca79cb43a49efbf89a7992724d93a071168c5defde18cd99a1299453.scope.
Dec 13 02:13:30 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:30 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98036d5ff6ff5a5ea36863de1b3deffd92dd0110f6946c158a80b7e37a871da2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:30 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98036d5ff6ff5a5ea36863de1b3deffd92dd0110f6946c158a80b7e37a871da2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:30 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98036d5ff6ff5a5ea36863de1b3deffd92dd0110f6946c158a80b7e37a871da2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:30 np0005558317 podman[75457]: 2025-12-13 07:13:30.775730037 +0000 UTC m=+0.070035582 container init 62ddf3c3ca79cb43a49efbf89a7992724d93a071168c5defde18cd99a1299453 (image=quay.io/ceph/ceph:v20, name=busy_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:13:30 np0005558317 podman[75457]: 2025-12-13 07:13:30.779679072 +0000 UTC m=+0.073984616 container start 62ddf3c3ca79cb43a49efbf89a7992724d93a071168c5defde18cd99a1299453 (image=quay.io/ceph/ceph:v20, name=busy_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:13:30 np0005558317 podman[75457]: 2025-12-13 07:13:30.780759472 +0000 UTC m=+0.075065017 container attach 62ddf3c3ca79cb43a49efbf89a7992724d93a071168c5defde18cd99a1299453 (image=quay.io/ceph/ceph:v20, name=busy_heisenberg, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:13:30 np0005558317 podman[75457]: 2025-12-13 07:13:30.721895454 +0000 UTC m=+0.016201019 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:31 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 13 02:13:31 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/779458911' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]: 
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]: {
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:    "fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:    "health": {
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "status": "HEALTH_OK",
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "checks": {},
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "mutes": []
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:    },
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:    "election_epoch": 5,
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:    "quorum": [
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        0
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:    ],
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:    "quorum_names": [
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "compute-0"
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:    ],
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:    "quorum_age": 8,
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:    "monmap": {
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "epoch": 1,
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "min_mon_release_name": "tentacle",
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "num_mons": 1
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:    },
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:    "osdmap": {
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "epoch": 1,
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "num_osds": 0,
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "num_up_osds": 0,
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "osd_up_since": 0,
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "num_in_osds": 0,
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "osd_in_since": 0,
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "num_remapped_pgs": 0
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:    },
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:    "pgmap": {
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "pgs_by_state": [],
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "num_pgs": 0,
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "num_pools": 0,
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "num_objects": 0,
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "data_bytes": 0,
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "bytes_used": 0,
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "bytes_avail": 0,
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "bytes_total": 0
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:    },
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:    "fsmap": {
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "epoch": 1,
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "btime": "2025-12-13T07:13:21.319345+0000",
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "by_rank": [],
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "up:standby": 0
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:    },
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:    "mgrmap": {
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "available": true,
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "num_standbys": 0,
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "modules": [
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:            "iostat",
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:            "nfs"
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        ],
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "services": {}
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:    },
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:    "servicemap": {
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "epoch": 1,
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "modified": "2025-12-13T07:13:21.320643+0000",
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:        "services": {}
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:    },
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]:    "progress_events": {}
Dec 13 02:13:31 np0005558317 busy_heisenberg[75470]: }
Dec 13 02:13:31 np0005558317 systemd[1]: libpod-62ddf3c3ca79cb43a49efbf89a7992724d93a071168c5defde18cd99a1299453.scope: Deactivated successfully.
Dec 13 02:13:31 np0005558317 podman[75497]: 2025-12-13 07:13:31.199339751 +0000 UTC m=+0.014690350 container died 62ddf3c3ca79cb43a49efbf89a7992724d93a071168c5defde18cd99a1299453 (image=quay.io/ceph/ceph:v20, name=busy_heisenberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:13:31 np0005558317 systemd[1]: var-lib-containers-storage-overlay-98036d5ff6ff5a5ea36863de1b3deffd92dd0110f6946c158a80b7e37a871da2-merged.mount: Deactivated successfully.
Dec 13 02:13:31 np0005558317 podman[75497]: 2025-12-13 07:13:31.215112906 +0000 UTC m=+0.030463485 container remove 62ddf3c3ca79cb43a49efbf89a7992724d93a071168c5defde18cd99a1299453 (image=quay.io/ceph/ceph:v20, name=busy_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 02:13:31 np0005558317 systemd[1]: libpod-conmon-62ddf3c3ca79cb43a49efbf89a7992724d93a071168c5defde18cd99a1299453.scope: Deactivated successfully.
Dec 13 02:13:31 np0005558317 podman[75508]: 2025-12-13 07:13:31.255570058 +0000 UTC m=+0.024169891 container create fff1a7f0181425627aa01d65a9467e734178729f7295447762a415918cc8638b (image=quay.io/ceph/ceph:v20, name=youthful_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 02:13:31 np0005558317 systemd[1]: Started libpod-conmon-fff1a7f0181425627aa01d65a9467e734178729f7295447762a415918cc8638b.scope.
Dec 13 02:13:31 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:31 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0451bc72b7e8a54f7f698e8b0b0f8e21f07a43d2c43466c66b9d35617e03747/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:31 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0451bc72b7e8a54f7f698e8b0b0f8e21f07a43d2c43466c66b9d35617e03747/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:31 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0451bc72b7e8a54f7f698e8b0b0f8e21f07a43d2c43466c66b9d35617e03747/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:31 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0451bc72b7e8a54f7f698e8b0b0f8e21f07a43d2c43466c66b9d35617e03747/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:31 np0005558317 podman[75508]: 2025-12-13 07:13:31.317307558 +0000 UTC m=+0.085907402 container init fff1a7f0181425627aa01d65a9467e734178729f7295447762a415918cc8638b (image=quay.io/ceph/ceph:v20, name=youthful_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 02:13:31 np0005558317 podman[75508]: 2025-12-13 07:13:31.320900854 +0000 UTC m=+0.089500688 container start fff1a7f0181425627aa01d65a9467e734178729f7295447762a415918cc8638b (image=quay.io/ceph/ceph:v20, name=youthful_panini, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 02:13:31 np0005558317 podman[75508]: 2025-12-13 07:13:31.321874935 +0000 UTC m=+0.090474770 container attach fff1a7f0181425627aa01d65a9467e734178729f7295447762a415918cc8638b (image=quay.io/ceph/ceph:v20, name=youthful_panini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:13:31 np0005558317 podman[75508]: 2025-12-13 07:13:31.245729577 +0000 UTC m=+0.014329431 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:31 np0005558317 ceph-mgr[75200]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 02:13:31 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.qsherl(active, since 2s)
Dec 13 02:13:31 np0005558317 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:13:31 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 13 02:13:31 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1106528978' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 13 02:13:31 np0005558317 youthful_panini[75521]: 
Dec 13 02:13:31 np0005558317 youthful_panini[75521]: [global]
Dec 13 02:13:31 np0005558317 youthful_panini[75521]: 	fsid = 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 02:13:31 np0005558317 youthful_panini[75521]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec 13 02:13:31 np0005558317 youthful_panini[75521]: 	osd_crush_chooseleaf_type = 0
Dec 13 02:13:31 np0005558317 systemd[1]: libpod-fff1a7f0181425627aa01d65a9467e734178729f7295447762a415918cc8638b.scope: Deactivated successfully.
Dec 13 02:13:31 np0005558317 podman[75508]: 2025-12-13 07:13:31.630256323 +0000 UTC m=+0.398856167 container died fff1a7f0181425627aa01d65a9467e734178729f7295447762a415918cc8638b (image=quay.io/ceph/ceph:v20, name=youthful_panini, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 02:13:31 np0005558317 systemd[1]: var-lib-containers-storage-overlay-a0451bc72b7e8a54f7f698e8b0b0f8e21f07a43d2c43466c66b9d35617e03747-merged.mount: Deactivated successfully.
Dec 13 02:13:31 np0005558317 podman[75508]: 2025-12-13 07:13:31.647006204 +0000 UTC m=+0.415606038 container remove fff1a7f0181425627aa01d65a9467e734178729f7295447762a415918cc8638b (image=quay.io/ceph/ceph:v20, name=youthful_panini, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 02:13:31 np0005558317 systemd[1]: libpod-conmon-fff1a7f0181425627aa01d65a9467e734178729f7295447762a415918cc8638b.scope: Deactivated successfully.
Dec 13 02:13:31 np0005558317 podman[75555]: 2025-12-13 07:13:31.684766005 +0000 UTC m=+0.024470697 container create dd89e672ba1e2fa098b9036ff8a90e16b5ee466ed6f52eb078e67cf7d148626d (image=quay.io/ceph/ceph:v20, name=amazing_hypatia, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:13:31 np0005558317 systemd[1]: Started libpod-conmon-dd89e672ba1e2fa098b9036ff8a90e16b5ee466ed6f52eb078e67cf7d148626d.scope.
Dec 13 02:13:31 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:31 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f23360571854d379264998ac9afb4ecd652a173a6b2b25b7befaabad3375576b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:31 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f23360571854d379264998ac9afb4ecd652a173a6b2b25b7befaabad3375576b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:31 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f23360571854d379264998ac9afb4ecd652a173a6b2b25b7befaabad3375576b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:31 np0005558317 podman[75555]: 2025-12-13 07:13:31.725568767 +0000 UTC m=+0.065273479 container init dd89e672ba1e2fa098b9036ff8a90e16b5ee466ed6f52eb078e67cf7d148626d (image=quay.io/ceph/ceph:v20, name=amazing_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 02:13:31 np0005558317 podman[75555]: 2025-12-13 07:13:31.729307917 +0000 UTC m=+0.069012619 container start dd89e672ba1e2fa098b9036ff8a90e16b5ee466ed6f52eb078e67cf7d148626d (image=quay.io/ceph/ceph:v20, name=amazing_hypatia, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:13:31 np0005558317 podman[75555]: 2025-12-13 07:13:31.73040006 +0000 UTC m=+0.070104752 container attach dd89e672ba1e2fa098b9036ff8a90e16b5ee466ed6f52eb078e67cf7d148626d (image=quay.io/ceph/ceph:v20, name=amazing_hypatia, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 02:13:31 np0005558317 podman[75555]: 2025-12-13 07:13:31.674696444 +0000 UTC m=+0.014401156 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Dec 13 02:13:32 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3001210130' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Dec 13 02:13:32 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/1106528978' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 13 02:13:32 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/3001210130' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Dec 13 02:13:32 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3001210130' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 13 02:13:32 np0005558317 ceph-mgr[75200]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 13 02:13:32 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.qsherl(active, since 3s)
Dec 13 02:13:32 np0005558317 systemd[1]: libpod-dd89e672ba1e2fa098b9036ff8a90e16b5ee466ed6f52eb078e67cf7d148626d.scope: Deactivated successfully.
Dec 13 02:13:32 np0005558317 conmon[75568]: conmon dd89e672ba1e2fa098b9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dd89e672ba1e2fa098b9036ff8a90e16b5ee466ed6f52eb078e67cf7d148626d.scope/container/memory.events
Dec 13 02:13:32 np0005558317 podman[75555]: 2025-12-13 07:13:32.502146605 +0000 UTC m=+0.841851297 container died dd89e672ba1e2fa098b9036ff8a90e16b5ee466ed6f52eb078e67cf7d148626d (image=quay.io/ceph/ceph:v20, name=amazing_hypatia, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:13:32 np0005558317 systemd[1]: var-lib-containers-storage-overlay-f23360571854d379264998ac9afb4ecd652a173a6b2b25b7befaabad3375576b-merged.mount: Deactivated successfully.
Dec 13 02:13:32 np0005558317 podman[75555]: 2025-12-13 07:13:32.526917547 +0000 UTC m=+0.866622239 container remove dd89e672ba1e2fa098b9036ff8a90e16b5ee466ed6f52eb078e67cf7d148626d (image=quay.io/ceph/ceph:v20, name=amazing_hypatia, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:13:32 np0005558317 systemd[1]: libpod-conmon-dd89e672ba1e2fa098b9036ff8a90e16b5ee466ed6f52eb078e67cf7d148626d.scope: Deactivated successfully.
Dec 13 02:13:32 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-qsherl[75196]: ignoring --setuser ceph since I am not root
Dec 13 02:13:32 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-qsherl[75196]: ignoring --setgroup ceph since I am not root
Dec 13 02:13:32 np0005558317 ceph-mgr[75200]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Dec 13 02:13:32 np0005558317 ceph-mgr[75200]: pidfile_write: ignore empty --pid-file
Dec 13 02:13:32 np0005558317 podman[75604]: 2025-12-13 07:13:32.579375361 +0000 UTC m=+0.032000895 container create 38c37013b84f3f7d75907f84b7fdf360591ea4366174164ffee834ca82fd7242 (image=quay.io/ceph/ceph:v20, name=trusting_hertz, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 02:13:32 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'alerts'
Dec 13 02:13:32 np0005558317 systemd[1]: Started libpod-conmon-38c37013b84f3f7d75907f84b7fdf360591ea4366174164ffee834ca82fd7242.scope.
Dec 13 02:13:32 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:32 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67742d6d53a4a08afeda6ff472d92059dfab7f6469a989995bc5cfbdf84fe12e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:32 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67742d6d53a4a08afeda6ff472d92059dfab7f6469a989995bc5cfbdf84fe12e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:32 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67742d6d53a4a08afeda6ff472d92059dfab7f6469a989995bc5cfbdf84fe12e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:32 np0005558317 podman[75604]: 2025-12-13 07:13:32.630812566 +0000 UTC m=+0.083438101 container init 38c37013b84f3f7d75907f84b7fdf360591ea4366174164ffee834ca82fd7242 (image=quay.io/ceph/ceph:v20, name=trusting_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 02:13:32 np0005558317 podman[75604]: 2025-12-13 07:13:32.634671191 +0000 UTC m=+0.087296715 container start 38c37013b84f3f7d75907f84b7fdf360591ea4366174164ffee834ca82fd7242 (image=quay.io/ceph/ceph:v20, name=trusting_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 02:13:32 np0005558317 podman[75604]: 2025-12-13 07:13:32.635597052 +0000 UTC m=+0.088222576 container attach 38c37013b84f3f7d75907f84b7fdf360591ea4366174164ffee834ca82fd7242 (image=quay.io/ceph/ceph:v20, name=trusting_hertz, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 02:13:32 np0005558317 podman[75604]: 2025-12-13 07:13:32.566223474 +0000 UTC m=+0.018849018 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:32 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'balancer'
Dec 13 02:13:32 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'cephadm'
Dec 13 02:13:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 13 02:13:32 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4121678077' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 13 02:13:32 np0005558317 trusting_hertz[75637]: {
Dec 13 02:13:32 np0005558317 trusting_hertz[75637]:    "epoch": 5,
Dec 13 02:13:32 np0005558317 trusting_hertz[75637]:    "available": true,
Dec 13 02:13:32 np0005558317 trusting_hertz[75637]:    "active_name": "compute-0.qsherl",
Dec 13 02:13:32 np0005558317 trusting_hertz[75637]:    "num_standby": 0
Dec 13 02:13:32 np0005558317 trusting_hertz[75637]: }
Dec 13 02:13:33 np0005558317 systemd[1]: libpod-38c37013b84f3f7d75907f84b7fdf360591ea4366174164ffee834ca82fd7242.scope: Deactivated successfully.
Dec 13 02:13:33 np0005558317 podman[75663]: 2025-12-13 07:13:33.036328743 +0000 UTC m=+0.017154283 container died 38c37013b84f3f7d75907f84b7fdf360591ea4366174164ffee834ca82fd7242 (image=quay.io/ceph/ceph:v20, name=trusting_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:13:33 np0005558317 systemd[1]: var-lib-containers-storage-overlay-67742d6d53a4a08afeda6ff472d92059dfab7f6469a989995bc5cfbdf84fe12e-merged.mount: Deactivated successfully.
Dec 13 02:13:33 np0005558317 podman[75663]: 2025-12-13 07:13:33.05463475 +0000 UTC m=+0.035460290 container remove 38c37013b84f3f7d75907f84b7fdf360591ea4366174164ffee834ca82fd7242 (image=quay.io/ceph/ceph:v20, name=trusting_hertz, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:13:33 np0005558317 systemd[1]: libpod-conmon-38c37013b84f3f7d75907f84b7fdf360591ea4366174164ffee834ca82fd7242.scope: Deactivated successfully.
Dec 13 02:13:33 np0005558317 podman[75677]: 2025-12-13 07:13:33.102310314 +0000 UTC m=+0.028954809 container create 7b7cad99209a3b1fee269c39cb074dfb803877027f9276257e969e59db87c9a2 (image=quay.io/ceph/ceph:v20, name=wizardly_roentgen, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:13:33 np0005558317 systemd[1]: Started libpod-conmon-7b7cad99209a3b1fee269c39cb074dfb803877027f9276257e969e59db87c9a2.scope.
Dec 13 02:13:33 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:33 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a25372e05e47285013215b462a66482a8dc0ff2105418fa078ec64a5b236423/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:33 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a25372e05e47285013215b462a66482a8dc0ff2105418fa078ec64a5b236423/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:33 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a25372e05e47285013215b462a66482a8dc0ff2105418fa078ec64a5b236423/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:33 np0005558317 podman[75677]: 2025-12-13 07:13:33.160067521 +0000 UTC m=+0.086712005 container init 7b7cad99209a3b1fee269c39cb074dfb803877027f9276257e969e59db87c9a2 (image=quay.io/ceph/ceph:v20, name=wizardly_roentgen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:13:33 np0005558317 podman[75677]: 2025-12-13 07:13:33.164490175 +0000 UTC m=+0.091134661 container start 7b7cad99209a3b1fee269c39cb074dfb803877027f9276257e969e59db87c9a2 (image=quay.io/ceph/ceph:v20, name=wizardly_roentgen, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 02:13:33 np0005558317 podman[75677]: 2025-12-13 07:13:33.165512919 +0000 UTC m=+0.092157403 container attach 7b7cad99209a3b1fee269c39cb074dfb803877027f9276257e969e59db87c9a2 (image=quay.io/ceph/ceph:v20, name=wizardly_roentgen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 02:13:33 np0005558317 podman[75677]: 2025-12-13 07:13:33.091599487 +0000 UTC m=+0.018243972 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:33 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'crash'
Dec 13 02:13:33 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'dashboard'
Dec 13 02:13:33 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/3001210130' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 13 02:13:34 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'devicehealth'
Dec 13 02:13:34 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'diskprediction_local'
Dec 13 02:13:34 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-qsherl[75196]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 13 02:13:34 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-qsherl[75196]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 13 02:13:34 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-qsherl[75196]:  from numpy import show_config as show_numpy_config
Dec 13 02:13:34 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'influx'
Dec 13 02:13:34 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'insights'
Dec 13 02:13:34 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'iostat'
Dec 13 02:13:34 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'k8sevents'
Dec 13 02:13:34 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'localpool'
Dec 13 02:13:34 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'mds_autoscaler'
Dec 13 02:13:35 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'mirroring'
Dec 13 02:13:35 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'nfs'
Dec 13 02:13:35 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'orchestrator'
Dec 13 02:13:35 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'osd_perf_query'
Dec 13 02:13:35 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'osd_support'
Dec 13 02:13:35 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'pg_autoscaler'
Dec 13 02:13:35 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'progress'
Dec 13 02:13:35 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'prometheus'
Dec 13 02:13:36 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'rbd_support'
Dec 13 02:13:36 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'rgw'
Dec 13 02:13:36 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'rook'
Dec 13 02:13:37 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'selftest'
Dec 13 02:13:37 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'smb'
Dec 13 02:13:37 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'snap_schedule'
Dec 13 02:13:37 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'stats'
Dec 13 02:13:37 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'status'
Dec 13 02:13:37 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'telegraf'
Dec 13 02:13:37 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'telemetry'
Dec 13 02:13:37 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'test_orchestrator'
Dec 13 02:13:37 np0005558317 ceph-mgr[75200]: mgr[py] Loading python module 'volumes'
Dec 13 02:13:38 np0005558317 ceph-mon[74928]: log_channel(cluster) log [INF] : Active manager daemon compute-0.qsherl restarted
Dec 13 02:13:38 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Dec 13 02:13:38 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 02:13:38 np0005558317 ceph-mon[74928]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.qsherl
Dec 13 02:13:38 np0005558317 ceph-mgr[75200]: ms_deliver_dispatch: unhandled message 0x55cf1656c000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 13 02:13:38 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Dec 13 02:13:38 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 13 02:13:38 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Dec 13 02:13:38 np0005558317 ceph-mgr[75200]: mgr handle_mgr_map Activating!
Dec 13 02:13:38 np0005558317 ceph-mgr[75200]: mgr handle_mgr_map I am now activating
Dec 13 02:13:38 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Dec 13 02:13:38 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.qsherl(active, starting, since 0.00581274s)
Dec 13 02:13:38 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 13 02:13:38 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 13 02:13:38 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.qsherl", "id": "compute-0.qsherl"} v 0)
Dec 13 02:13:38 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "mgr metadata", "who": "compute-0.qsherl", "id": "compute-0.qsherl"} : dispatch
Dec 13 02:13:38 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 13 02:13:38 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "mds metadata"} : dispatch
Dec 13 02:13:38 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).mds e1 all = 1
Dec 13 02:13:38 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 13 02:13:38 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata"} : dispatch
Dec 13 02:13:38 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 13 02:13:38 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "mon metadata"} : dispatch
Dec 13 02:13:38 np0005558317 ceph-mgr[75200]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:13:38 np0005558317 ceph-mgr[75200]: mgr load Constructed class from module: balancer
Dec 13 02:13:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Starting
Dec 13 02:13:38 np0005558317 ceph-mon[74928]: log_channel(cluster) log [INF] : Manager daemon compute-0.qsherl is now available
Dec 13 02:13:38 np0005558317 ceph-mgr[75200]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:13:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:13:38
Dec 13 02:13:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:13:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 02:13:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] No pools available
Dec 13 02:13:38 np0005558317 ceph-mon[74928]: Active manager daemon compute-0.qsherl restarted
Dec 13 02:13:38 np0005558317 ceph-mon[74928]: Activating manager daemon compute-0.qsherl
Dec 13 02:13:38 np0005558317 ceph-mon[74928]: Manager daemon compute-0.qsherl is now available
Dec 13 02:13:39 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Dec 13 02:13:39 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:39 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Dec 13 02:13:39 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Dec 13 02:13:39 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Dec 13 02:13:39 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:39 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Dec 13 02:13:39 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: mgr load Constructed class from module: cephadm
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: mgr load Constructed class from module: crash
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: mgr load Constructed class from module: devicehealth
Dec 13 02:13:39 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 13 02:13:39 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: [devicehealth INFO root] Starting
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: mgr load Constructed class from module: iostat
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: mgr load Constructed class from module: nfs
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: mgr load Constructed class from module: orchestrator
Dec 13 02:13:39 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 13 02:13:39 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: mgr load Constructed class from module: pg_autoscaler
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: mgr load Constructed class from module: progress
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: [progress INFO root] Loading...
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: [progress INFO root] No stored events to load
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: [progress INFO root] Loaded [] historic events
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: [progress INFO root] Loaded OSDMap, ready.
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] recovery thread starting
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] starting setup
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: mgr load Constructed class from module: rbd_support
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: mgr load Constructed class from module: status
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: mgr load Constructed class from module: telemetry
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:13:39 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qsherl/mirror_snapshot_schedule"} v 0)
Dec 13 02:13:39 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qsherl/mirror_snapshot_schedule"} : dispatch
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] PerfHandler: starting
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] TaskHandler: starting
Dec 13 02:13:39 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qsherl/trash_purge_schedule"} v 0)
Dec 13 02:13:39 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qsherl/trash_purge_schedule"} : dispatch
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] setup complete
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: mgr load Constructed class from module: volumes
Dec 13 02:13:39 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.qsherl(active, since 1.0081s)
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec 13 02:13:39 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec 13 02:13:39 np0005558317 wizardly_roentgen[75699]: {
Dec 13 02:13:39 np0005558317 wizardly_roentgen[75699]:    "mgrmap_epoch": 7,
Dec 13 02:13:39 np0005558317 wizardly_roentgen[75699]:    "initialized": true
Dec 13 02:13:39 np0005558317 wizardly_roentgen[75699]: }
Dec 13 02:13:39 np0005558317 systemd[1]: libpod-7b7cad99209a3b1fee269c39cb074dfb803877027f9276257e969e59db87c9a2.scope: Deactivated successfully.
Dec 13 02:13:39 np0005558317 podman[75677]: 2025-12-13 07:13:39.206526831 +0000 UTC m=+6.133171315 container died 7b7cad99209a3b1fee269c39cb074dfb803877027f9276257e969e59db87c9a2 (image=quay.io/ceph/ceph:v20, name=wizardly_roentgen, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 02:13:39 np0005558317 systemd[1]: var-lib-containers-storage-overlay-0a25372e05e47285013215b462a66482a8dc0ff2105418fa078ec64a5b236423-merged.mount: Deactivated successfully.
Dec 13 02:13:39 np0005558317 podman[75677]: 2025-12-13 07:13:39.231300537 +0000 UTC m=+6.157945022 container remove 7b7cad99209a3b1fee269c39cb074dfb803877027f9276257e969e59db87c9a2 (image=quay.io/ceph/ceph:v20, name=wizardly_roentgen, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 02:13:39 np0005558317 systemd[1]: libpod-conmon-7b7cad99209a3b1fee269c39cb074dfb803877027f9276257e969e59db87c9a2.scope: Deactivated successfully.
Dec 13 02:13:39 np0005558317 podman[75842]: 2025-12-13 07:13:39.276251577 +0000 UTC m=+0.030227331 container create 72da5cc3dd0c213ae66d8337e0c5a03127dd8ac40d60f17030a52076271c0bcc (image=quay.io/ceph/ceph:v20, name=friendly_moore, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 02:13:39 np0005558317 systemd[1]: Started libpod-conmon-72da5cc3dd0c213ae66d8337e0c5a03127dd8ac40d60f17030a52076271c0bcc.scope.
Dec 13 02:13:39 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:39 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98bf8a4748430898131f0a24444b9f30d20ba8ff9a86417fdf77b477ac3f18b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:39 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98bf8a4748430898131f0a24444b9f30d20ba8ff9a86417fdf77b477ac3f18b7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:39 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98bf8a4748430898131f0a24444b9f30d20ba8ff9a86417fdf77b477ac3f18b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:39 np0005558317 podman[75842]: 2025-12-13 07:13:39.323663936 +0000 UTC m=+0.077639709 container init 72da5cc3dd0c213ae66d8337e0c5a03127dd8ac40d60f17030a52076271c0bcc (image=quay.io/ceph/ceph:v20, name=friendly_moore, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 02:13:39 np0005558317 podman[75842]: 2025-12-13 07:13:39.32790542 +0000 UTC m=+0.081881173 container start 72da5cc3dd0c213ae66d8337e0c5a03127dd8ac40d60f17030a52076271c0bcc (image=quay.io/ceph/ceph:v20, name=friendly_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 02:13:39 np0005558317 podman[75842]: 2025-12-13 07:13:39.329000028 +0000 UTC m=+0.082975801 container attach 72da5cc3dd0c213ae66d8337e0c5a03127dd8ac40d60f17030a52076271c0bcc (image=quay.io/ceph/ceph:v20, name=friendly_moore, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 02:13:39 np0005558317 podman[75842]: 2025-12-13 07:13:39.264822761 +0000 UTC m=+0.018798534 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:39 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Dec 13 02:13:39 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/699535926' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Dec 13 02:13:40 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:40 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:40 np0005558317 ceph-mon[74928]: Found migration_current of "None". Setting to last migration.
Dec 13 02:13:40 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:40 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:40 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qsherl/mirror_snapshot_schedule"} : dispatch
Dec 13 02:13:40 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qsherl/trash_purge_schedule"} : dispatch
Dec 13 02:13:40 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/699535926' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Dec 13 02:13:40 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/699535926' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Dec 13 02:13:40 np0005558317 friendly_moore[75857]: module 'orchestrator' is already enabled (always-on)
Dec 13 02:13:40 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.qsherl(active, since 2s)
Dec 13 02:13:40 np0005558317 ceph-mgr[75200]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 02:13:40 np0005558317 systemd[1]: libpod-72da5cc3dd0c213ae66d8337e0c5a03127dd8ac40d60f17030a52076271c0bcc.scope: Deactivated successfully.
Dec 13 02:13:40 np0005558317 podman[75842]: 2025-12-13 07:13:40.199701185 +0000 UTC m=+0.953676938 container died 72da5cc3dd0c213ae66d8337e0c5a03127dd8ac40d60f17030a52076271c0bcc (image=quay.io/ceph/ceph:v20, name=friendly_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:13:40 np0005558317 systemd[1]: var-lib-containers-storage-overlay-98bf8a4748430898131f0a24444b9f30d20ba8ff9a86417fdf77b477ac3f18b7-merged.mount: Deactivated successfully.
Dec 13 02:13:40 np0005558317 podman[75842]: 2025-12-13 07:13:40.229895432 +0000 UTC m=+0.983871185 container remove 72da5cc3dd0c213ae66d8337e0c5a03127dd8ac40d60f17030a52076271c0bcc (image=quay.io/ceph/ceph:v20, name=friendly_moore, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:13:40 np0005558317 systemd[1]: libpod-conmon-72da5cc3dd0c213ae66d8337e0c5a03127dd8ac40d60f17030a52076271c0bcc.scope: Deactivated successfully.
Dec 13 02:13:40 np0005558317 podman[75894]: 2025-12-13 07:13:40.288176855 +0000 UTC m=+0.044136271 container create 7d1d7ffdecd4c1414bbf71569d441acbbcdf592b24e66d20e92f916e52638e7e (image=quay.io/ceph/ceph:v20, name=beautiful_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:13:40 np0005558317 systemd[1]: Started libpod-conmon-7d1d7ffdecd4c1414bbf71569d441acbbcdf592b24e66d20e92f916e52638e7e.scope.
Dec 13 02:13:40 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:40 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ea907f8c6f93456d77ac6861004f854dae845d4252a877191d4c9bbf8bcd7b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:40 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ea907f8c6f93456d77ac6861004f854dae845d4252a877191d4c9bbf8bcd7b7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:40 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ea907f8c6f93456d77ac6861004f854dae845d4252a877191d4c9bbf8bcd7b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:40 np0005558317 podman[75894]: 2025-12-13 07:13:40.344468617 +0000 UTC m=+0.100428043 container init 7d1d7ffdecd4c1414bbf71569d441acbbcdf592b24e66d20e92f916e52638e7e (image=quay.io/ceph/ceph:v20, name=beautiful_wu, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:13:40 np0005558317 ceph-mgr[75200]: [cephadm INFO cherrypy.error] [13/Dec/2025:07:13:40] ENGINE Bus STARTING
Dec 13 02:13:40 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : [13/Dec/2025:07:13:40] ENGINE Bus STARTING
Dec 13 02:13:40 np0005558317 podman[75894]: 2025-12-13 07:13:40.348179695 +0000 UTC m=+0.104139110 container start 7d1d7ffdecd4c1414bbf71569d441acbbcdf592b24e66d20e92f916e52638e7e (image=quay.io/ceph/ceph:v20, name=beautiful_wu, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 02:13:40 np0005558317 podman[75894]: 2025-12-13 07:13:40.349128278 +0000 UTC m=+0.105087693 container attach 7d1d7ffdecd4c1414bbf71569d441acbbcdf592b24e66d20e92f916e52638e7e (image=quay.io/ceph/ceph:v20, name=beautiful_wu, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 02:13:40 np0005558317 podman[75894]: 2025-12-13 07:13:40.261666914 +0000 UTC m=+0.017626340 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:40 np0005558317 ceph-mgr[75200]: [cephadm INFO cherrypy.error] [13/Dec/2025:07:13:40] ENGINE Serving on https://192.168.122.100:7150
Dec 13 02:13:40 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : [13/Dec/2025:07:13:40] ENGINE Serving on https://192.168.122.100:7150
Dec 13 02:13:40 np0005558317 ceph-mgr[75200]: [cephadm INFO cherrypy.error] [13/Dec/2025:07:13:40] ENGINE Client ('192.168.122.100', 58058) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 13 02:13:40 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : [13/Dec/2025:07:13:40] ENGINE Client ('192.168.122.100', 58058) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 13 02:13:40 np0005558317 ceph-mgr[75200]: [cephadm INFO cherrypy.error] [13/Dec/2025:07:13:40] ENGINE Serving on http://192.168.122.100:8765
Dec 13 02:13:40 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : [13/Dec/2025:07:13:40] ENGINE Serving on http://192.168.122.100:8765
Dec 13 02:13:40 np0005558317 ceph-mgr[75200]: [cephadm INFO cherrypy.error] [13/Dec/2025:07:13:40] ENGINE Bus STARTED
Dec 13 02:13:40 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : [13/Dec/2025:07:13:40] ENGINE Bus STARTED
Dec 13 02:13:40 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 13 02:13:40 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 02:13:40 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:13:40 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Dec 13 02:13:40 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:40 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 13 02:13:40 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 02:13:40 np0005558317 systemd[1]: libpod-7d1d7ffdecd4c1414bbf71569d441acbbcdf592b24e66d20e92f916e52638e7e.scope: Deactivated successfully.
Dec 13 02:13:40 np0005558317 podman[75956]: 2025-12-13 07:13:40.707369175 +0000 UTC m=+0.016763619 container died 7d1d7ffdecd4c1414bbf71569d441acbbcdf592b24e66d20e92f916e52638e7e (image=quay.io/ceph/ceph:v20, name=beautiful_wu, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 02:13:40 np0005558317 systemd[1]: var-lib-containers-storage-overlay-2ea907f8c6f93456d77ac6861004f854dae845d4252a877191d4c9bbf8bcd7b7-merged.mount: Deactivated successfully.
Dec 13 02:13:40 np0005558317 podman[75956]: 2025-12-13 07:13:40.722023997 +0000 UTC m=+0.031418441 container remove 7d1d7ffdecd4c1414bbf71569d441acbbcdf592b24e66d20e92f916e52638e7e (image=quay.io/ceph/ceph:v20, name=beautiful_wu, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 02:13:40 np0005558317 systemd[1]: libpod-conmon-7d1d7ffdecd4c1414bbf71569d441acbbcdf592b24e66d20e92f916e52638e7e.scope: Deactivated successfully.
Dec 13 02:13:40 np0005558317 podman[75968]: 2025-12-13 07:13:40.762234625 +0000 UTC m=+0.024221979 container create 67a57ff5cdb6c7fc69f1984da93db2f67beef781c2f3ced97799fab594c61eba (image=quay.io/ceph/ceph:v20, name=pensive_banzai, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:13:40 np0005558317 systemd[1]: Started libpod-conmon-67a57ff5cdb6c7fc69f1984da93db2f67beef781c2f3ced97799fab594c61eba.scope.
Dec 13 02:13:40 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:40 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55c48038ca479d4e5298459e28dd90f45324dcbd981f1c9edfe0119e8a35a30e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:40 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55c48038ca479d4e5298459e28dd90f45324dcbd981f1c9edfe0119e8a35a30e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:40 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55c48038ca479d4e5298459e28dd90f45324dcbd981f1c9edfe0119e8a35a30e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:40 np0005558317 podman[75968]: 2025-12-13 07:13:40.808155189 +0000 UTC m=+0.070142564 container init 67a57ff5cdb6c7fc69f1984da93db2f67beef781c2f3ced97799fab594c61eba (image=quay.io/ceph/ceph:v20, name=pensive_banzai, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 02:13:40 np0005558317 podman[75968]: 2025-12-13 07:13:40.811887927 +0000 UTC m=+0.073875281 container start 67a57ff5cdb6c7fc69f1984da93db2f67beef781c2f3ced97799fab594c61eba (image=quay.io/ceph/ceph:v20, name=pensive_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:13:40 np0005558317 podman[75968]: 2025-12-13 07:13:40.812943652 +0000 UTC m=+0.074931026 container attach 67a57ff5cdb6c7fc69f1984da93db2f67beef781c2f3ced97799fab594c61eba (image=quay.io/ceph/ceph:v20, name=pensive_banzai, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 02:13:40 np0005558317 podman[75968]: 2025-12-13 07:13:40.752819013 +0000 UTC m=+0.014806387 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:41 np0005558317 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:13:41 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:13:41 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Dec 13 02:13:41 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:41 np0005558317 ceph-mgr[75200]: [cephadm INFO root] Set ssh ssh_user
Dec 13 02:13:41 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Dec 13 02:13:41 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Dec 13 02:13:41 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:41 np0005558317 ceph-mgr[75200]: [cephadm INFO root] Set ssh ssh_config
Dec 13 02:13:41 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Dec 13 02:13:41 np0005558317 ceph-mgr[75200]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Dec 13 02:13:41 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Dec 13 02:13:41 np0005558317 pensive_banzai[75981]: ssh user set to ceph-admin. sudo will be used
Dec 13 02:13:41 np0005558317 systemd[1]: libpod-67a57ff5cdb6c7fc69f1984da93db2f67beef781c2f3ced97799fab594c61eba.scope: Deactivated successfully.
Dec 13 02:13:41 np0005558317 podman[75968]: 2025-12-13 07:13:41.130790305 +0000 UTC m=+0.392777659 container died 67a57ff5cdb6c7fc69f1984da93db2f67beef781c2f3ced97799fab594c61eba (image=quay.io/ceph/ceph:v20, name=pensive_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 02:13:41 np0005558317 systemd[1]: var-lib-containers-storage-overlay-55c48038ca479d4e5298459e28dd90f45324dcbd981f1c9edfe0119e8a35a30e-merged.mount: Deactivated successfully.
Dec 13 02:13:41 np0005558317 podman[75968]: 2025-12-13 07:13:41.146727608 +0000 UTC m=+0.408714962 container remove 67a57ff5cdb6c7fc69f1984da93db2f67beef781c2f3ced97799fab594c61eba (image=quay.io/ceph/ceph:v20, name=pensive_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 02:13:41 np0005558317 systemd[1]: libpod-conmon-67a57ff5cdb6c7fc69f1984da93db2f67beef781c2f3ced97799fab594c61eba.scope: Deactivated successfully.
Dec 13 02:13:41 np0005558317 podman[76016]: 2025-12-13 07:13:41.184366281 +0000 UTC m=+0.025123234 container create 3f920aada7861a4b3f69940a11b9bc1ecf13f1127d1d382d18db57a4ebada972 (image=quay.io/ceph/ceph:v20, name=great_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 02:13:41 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/699535926' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Dec 13 02:13:41 np0005558317 ceph-mon[74928]: [13/Dec/2025:07:13:40] ENGINE Bus STARTING
Dec 13 02:13:41 np0005558317 ceph-mon[74928]: [13/Dec/2025:07:13:40] ENGINE Serving on https://192.168.122.100:7150
Dec 13 02:13:41 np0005558317 ceph-mon[74928]: [13/Dec/2025:07:13:40] ENGINE Client ('192.168.122.100', 58058) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 13 02:13:41 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:41 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:41 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:41 np0005558317 systemd[1]: Started libpod-conmon-3f920aada7861a4b3f69940a11b9bc1ecf13f1127d1d382d18db57a4ebada972.scope.
Dec 13 02:13:41 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:41 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41dba9670584cb2397acde0de69c4c3a8386aa90b8880c8018af3968d0935177/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:41 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41dba9670584cb2397acde0de69c4c3a8386aa90b8880c8018af3968d0935177/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:41 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41dba9670584cb2397acde0de69c4c3a8386aa90b8880c8018af3968d0935177/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:41 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41dba9670584cb2397acde0de69c4c3a8386aa90b8880c8018af3968d0935177/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:41 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41dba9670584cb2397acde0de69c4c3a8386aa90b8880c8018af3968d0935177/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:41 np0005558317 podman[76016]: 2025-12-13 07:13:41.224063956 +0000 UTC m=+0.064820929 container init 3f920aada7861a4b3f69940a11b9bc1ecf13f1127d1d382d18db57a4ebada972 (image=quay.io/ceph/ceph:v20, name=great_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 02:13:41 np0005558317 podman[76016]: 2025-12-13 07:13:41.230485378 +0000 UTC m=+0.071242332 container start 3f920aada7861a4b3f69940a11b9bc1ecf13f1127d1d382d18db57a4ebada972 (image=quay.io/ceph/ceph:v20, name=great_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:13:41 np0005558317 podman[76016]: 2025-12-13 07:13:41.231868619 +0000 UTC m=+0.072625572 container attach 3f920aada7861a4b3f69940a11b9bc1ecf13f1127d1d382d18db57a4ebada972 (image=quay.io/ceph/ceph:v20, name=great_banzai, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:13:41 np0005558317 podman[76016]: 2025-12-13 07:13:41.174346143 +0000 UTC m=+0.015103106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:41 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:13:41 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Dec 13 02:13:41 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:41 np0005558317 ceph-mgr[75200]: [cephadm INFO root] Set ssh ssh_identity_key
Dec 13 02:13:41 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Dec 13 02:13:41 np0005558317 ceph-mgr[75200]: [cephadm INFO root] Set ssh private key
Dec 13 02:13:41 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Set ssh private key
Dec 13 02:13:41 np0005558317 systemd[1]: libpod-3f920aada7861a4b3f69940a11b9bc1ecf13f1127d1d382d18db57a4ebada972.scope: Deactivated successfully.
Dec 13 02:13:41 np0005558317 podman[76016]: 2025-12-13 07:13:41.550704612 +0000 UTC m=+0.391461565 container died 3f920aada7861a4b3f69940a11b9bc1ecf13f1127d1d382d18db57a4ebada972 (image=quay.io/ceph/ceph:v20, name=great_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 02:13:41 np0005558317 systemd[1]: var-lib-containers-storage-overlay-41dba9670584cb2397acde0de69c4c3a8386aa90b8880c8018af3968d0935177-merged.mount: Deactivated successfully.
Dec 13 02:13:41 np0005558317 podman[76016]: 2025-12-13 07:13:41.569465284 +0000 UTC m=+0.410222237 container remove 3f920aada7861a4b3f69940a11b9bc1ecf13f1127d1d382d18db57a4ebada972 (image=quay.io/ceph/ceph:v20, name=great_banzai, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 02:13:41 np0005558317 systemd[1]: libpod-conmon-3f920aada7861a4b3f69940a11b9bc1ecf13f1127d1d382d18db57a4ebada972.scope: Deactivated successfully.
Dec 13 02:13:41 np0005558317 podman[76065]: 2025-12-13 07:13:41.607467049 +0000 UTC m=+0.025229875 container create 83ff7c1acdea74892cee76a7823c21b91697ef4ce1a1befd2f5f0f65b5c6cd79 (image=quay.io/ceph/ceph:v20, name=amazing_engelbart, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:13:41 np0005558317 systemd[1]: Started libpod-conmon-83ff7c1acdea74892cee76a7823c21b91697ef4ce1a1befd2f5f0f65b5c6cd79.scope.
Dec 13 02:13:41 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:41 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e80302afd7e483cfbeb4a67c5300e614b7907d1f5c3726db2236bd9dc504c1e6/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:41 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e80302afd7e483cfbeb4a67c5300e614b7907d1f5c3726db2236bd9dc504c1e6/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:41 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e80302afd7e483cfbeb4a67c5300e614b7907d1f5c3726db2236bd9dc504c1e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:41 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e80302afd7e483cfbeb4a67c5300e614b7907d1f5c3726db2236bd9dc504c1e6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:41 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e80302afd7e483cfbeb4a67c5300e614b7907d1f5c3726db2236bd9dc504c1e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:41 np0005558317 podman[76065]: 2025-12-13 07:13:41.64854064 +0000 UTC m=+0.066303466 container init 83ff7c1acdea74892cee76a7823c21b91697ef4ce1a1befd2f5f0f65b5c6cd79 (image=quay.io/ceph/ceph:v20, name=amazing_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 02:13:41 np0005558317 podman[76065]: 2025-12-13 07:13:41.653177538 +0000 UTC m=+0.070940354 container start 83ff7c1acdea74892cee76a7823c21b91697ef4ce1a1befd2f5f0f65b5c6cd79 (image=quay.io/ceph/ceph:v20, name=amazing_engelbart, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:13:41 np0005558317 podman[76065]: 2025-12-13 07:13:41.654337619 +0000 UTC m=+0.072100445 container attach 83ff7c1acdea74892cee76a7823c21b91697ef4ce1a1befd2f5f0f65b5c6cd79 (image=quay.io/ceph/ceph:v20, name=amazing_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 02:13:41 np0005558317 podman[76065]: 2025-12-13 07:13:41.596612763 +0000 UTC m=+0.014375599 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:41 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:13:41 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Dec 13 02:13:41 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:41 np0005558317 ceph-mgr[75200]: [cephadm INFO root] Set ssh ssh_identity_pub
Dec 13 02:13:41 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Dec 13 02:13:41 np0005558317 systemd[1]: libpod-83ff7c1acdea74892cee76a7823c21b91697ef4ce1a1befd2f5f0f65b5c6cd79.scope: Deactivated successfully.
Dec 13 02:13:41 np0005558317 conmon[76080]: conmon 83ff7c1acdea74892cee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-83ff7c1acdea74892cee76a7823c21b91697ef4ce1a1befd2f5f0f65b5c6cd79.scope/container/memory.events
Dec 13 02:13:41 np0005558317 podman[76106]: 2025-12-13 07:13:41.998611457 +0000 UTC m=+0.015162017 container died 83ff7c1acdea74892cee76a7823c21b91697ef4ce1a1befd2f5f0f65b5c6cd79 (image=quay.io/ceph/ceph:v20, name=amazing_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:13:42 np0005558317 systemd[1]: var-lib-containers-storage-overlay-e80302afd7e483cfbeb4a67c5300e614b7907d1f5c3726db2236bd9dc504c1e6-merged.mount: Deactivated successfully.
Dec 13 02:13:42 np0005558317 podman[76106]: 2025-12-13 07:13:42.015950155 +0000 UTC m=+0.032500695 container remove 83ff7c1acdea74892cee76a7823c21b91697ef4ce1a1befd2f5f0f65b5c6cd79 (image=quay.io/ceph/ceph:v20, name=amazing_engelbart, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 02:13:42 np0005558317 systemd[1]: libpod-conmon-83ff7c1acdea74892cee76a7823c21b91697ef4ce1a1befd2f5f0f65b5c6cd79.scope: Deactivated successfully.
Dec 13 02:13:42 np0005558317 podman[76118]: 2025-12-13 07:13:42.058797611 +0000 UTC m=+0.025612003 container create 6e4e34c54eea083c9d4375b77b584551780e332ce6a5b0c50d6e5e990e6f8ea0 (image=quay.io/ceph/ceph:v20, name=zen_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:13:42 np0005558317 systemd[1]: Started libpod-conmon-6e4e34c54eea083c9d4375b77b584551780e332ce6a5b0c50d6e5e990e6f8ea0.scope.
Dec 13 02:13:42 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:42 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c852501fe91991baa1a0c77a870f86632bc865707cbe965468914e4c81f11d2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:42 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c852501fe91991baa1a0c77a870f86632bc865707cbe965468914e4c81f11d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:42 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c852501fe91991baa1a0c77a870f86632bc865707cbe965468914e4c81f11d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:42 np0005558317 podman[76118]: 2025-12-13 07:13:42.099166948 +0000 UTC m=+0.065981350 container init 6e4e34c54eea083c9d4375b77b584551780e332ce6a5b0c50d6e5e990e6f8ea0 (image=quay.io/ceph/ceph:v20, name=zen_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 02:13:42 np0005558317 podman[76118]: 2025-12-13 07:13:42.103400427 +0000 UTC m=+0.070214818 container start 6e4e34c54eea083c9d4375b77b584551780e332ce6a5b0c50d6e5e990e6f8ea0 (image=quay.io/ceph/ceph:v20, name=zen_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:13:42 np0005558317 podman[76118]: 2025-12-13 07:13:42.104445622 +0000 UTC m=+0.071260014 container attach 6e4e34c54eea083c9d4375b77b584551780e332ce6a5b0c50d6e5e990e6f8ea0 (image=quay.io/ceph/ceph:v20, name=zen_chebyshev, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 02:13:42 np0005558317 podman[76118]: 2025-12-13 07:13:42.048511262 +0000 UTC m=+0.015325674 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:42 np0005558317 ceph-mgr[75200]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 02:13:42 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:13:42 np0005558317 zen_chebyshev[76131]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDlgKQw8WAbDTWUzeHJ5nqECv2AJbxtodMCfaa9/+87rAkbcBDDM+m2+wp4TmMHAGS1DSLXP1DQIuhhLRxztl/Gysfxve+g7QaHj+gk7fnWVCy+QBJ911iNvmhQP30hFsuaGnE33QpjQgzPKNppCtTvHw4/C26IUbp4X+TAfhW8CmYaeWBJI0fm9uOZnGmMO0YycxGwtjKDj6jqy2Vmab1EtnFv6N/SM1eHaViS9EcwvOLyOF5ogBiL2RRMJHZ89GA4I3c2T2jaujU2X/TKH7lkhQ60CISQyPgyeZyYmXP4IX1iEySEWn7dkw5Pd7w8ZgrNuRejDe6BSDQGJDncCTq2Mue7LQ61JvIWUMD8YyqeKRnuQbSNeFVLTvmz9wO/1s532/3VDT75RUvWV9+OJd3/jbC5DtwRHYXrSH5yqNBZtB9q4q9TCIerhf5M6m341RyepNF5sf32n0ocRhNhI48E0EK5g/XvihxelmvRp/Wtldy7Ne87BaDFwIkdKRRzuRM= zuul@controller
Dec 13 02:13:42 np0005558317 systemd[1]: libpod-6e4e34c54eea083c9d4375b77b584551780e332ce6a5b0c50d6e5e990e6f8ea0.scope: Deactivated successfully.
Dec 13 02:13:42 np0005558317 conmon[76131]: conmon 6e4e34c54eea083c9d43 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6e4e34c54eea083c9d4375b77b584551780e332ce6a5b0c50d6e5e990e6f8ea0.scope/container/memory.events
Dec 13 02:13:42 np0005558317 podman[76118]: 2025-12-13 07:13:42.419938349 +0000 UTC m=+0.386752741 container died 6e4e34c54eea083c9d4375b77b584551780e332ce6a5b0c50d6e5e990e6f8ea0 (image=quay.io/ceph/ceph:v20, name=zen_chebyshev, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:13:42 np0005558317 systemd[1]: var-lib-containers-storage-overlay-4c852501fe91991baa1a0c77a870f86632bc865707cbe965468914e4c81f11d2-merged.mount: Deactivated successfully.
Dec 13 02:13:42 np0005558317 podman[76118]: 2025-12-13 07:13:42.438339775 +0000 UTC m=+0.405154167 container remove 6e4e34c54eea083c9d4375b77b584551780e332ce6a5b0c50d6e5e990e6f8ea0 (image=quay.io/ceph/ceph:v20, name=zen_chebyshev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Dec 13 02:13:42 np0005558317 systemd[1]: libpod-conmon-6e4e34c54eea083c9d4375b77b584551780e332ce6a5b0c50d6e5e990e6f8ea0.scope: Deactivated successfully.
Dec 13 02:13:42 np0005558317 podman[76166]: 2025-12-13 07:13:42.478662565 +0000 UTC m=+0.025843308 container create fe96fa435b331d9ee9e156957224e3931b814f9f393346102e47d8be70a8b1d5 (image=quay.io/ceph/ceph:v20, name=stupefied_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 02:13:42 np0005558317 systemd[1]: Started libpod-conmon-fe96fa435b331d9ee9e156957224e3931b814f9f393346102e47d8be70a8b1d5.scope.
Dec 13 02:13:42 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:42 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b6c4f373782b5ba3c83a613429540f38cb16e7b054bbee5e74403dc151e834/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:42 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b6c4f373782b5ba3c83a613429540f38cb16e7b054bbee5e74403dc151e834/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:42 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b6c4f373782b5ba3c83a613429540f38cb16e7b054bbee5e74403dc151e834/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:42 np0005558317 podman[76166]: 2025-12-13 07:13:42.526357895 +0000 UTC m=+0.073538628 container init fe96fa435b331d9ee9e156957224e3931b814f9f393346102e47d8be70a8b1d5 (image=quay.io/ceph/ceph:v20, name=stupefied_sammet, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 02:13:42 np0005558317 podman[76166]: 2025-12-13 07:13:42.5306846 +0000 UTC m=+0.077865333 container start fe96fa435b331d9ee9e156957224e3931b814f9f393346102e47d8be70a8b1d5 (image=quay.io/ceph/ceph:v20, name=stupefied_sammet, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 02:13:42 np0005558317 podman[76166]: 2025-12-13 07:13:42.531754562 +0000 UTC m=+0.078935294 container attach fe96fa435b331d9ee9e156957224e3931b814f9f393346102e47d8be70a8b1d5 (image=quay.io/ceph/ceph:v20, name=stupefied_sammet, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:13:42 np0005558317 ceph-mon[74928]: [13/Dec/2025:07:13:40] ENGINE Serving on http://192.168.122.100:8765
Dec 13 02:13:42 np0005558317 ceph-mon[74928]: [13/Dec/2025:07:13:40] ENGINE Bus STARTED
Dec 13 02:13:42 np0005558317 ceph-mon[74928]: Set ssh ssh_user
Dec 13 02:13:42 np0005558317 ceph-mon[74928]: Set ssh ssh_config
Dec 13 02:13:42 np0005558317 ceph-mon[74928]: ssh user set to ceph-admin. sudo will be used
Dec 13 02:13:42 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:42 np0005558317 ceph-mon[74928]: Set ssh ssh_identity_key
Dec 13 02:13:42 np0005558317 ceph-mon[74928]: Set ssh private key
Dec 13 02:13:42 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:42 np0005558317 podman[76166]: 2025-12-13 07:13:42.468385564 +0000 UTC m=+0.015566297 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019901112 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:13:42 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:13:43 np0005558317 systemd[1]: Created slice User Slice of UID 42477.
Dec 13 02:13:43 np0005558317 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 13 02:13:43 np0005558317 systemd-logind[745]: New session 20 of user ceph-admin.
Dec 13 02:13:43 np0005558317 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 13 02:13:43 np0005558317 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:13:43 np0005558317 systemd[1]: Starting User Manager for UID 42477...
Dec 13 02:13:43 np0005558317 systemd[76210]: Queued start job for default target Main User Target.
Dec 13 02:13:43 np0005558317 systemd[76210]: Created slice User Application Slice.
Dec 13 02:13:43 np0005558317 systemd[76210]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 13 02:13:43 np0005558317 systemd[76210]: Started Daily Cleanup of User's Temporary Directories.
Dec 13 02:13:43 np0005558317 systemd[76210]: Reached target Paths.
Dec 13 02:13:43 np0005558317 systemd[76210]: Reached target Timers.
Dec 13 02:13:43 np0005558317 systemd[76210]: Starting D-Bus User Message Bus Socket...
Dec 13 02:13:43 np0005558317 systemd[76210]: Starting Create User's Volatile Files and Directories...
Dec 13 02:13:43 np0005558317 systemd[76210]: Finished Create User's Volatile Files and Directories.
Dec 13 02:13:43 np0005558317 systemd[76210]: Listening on D-Bus User Message Bus Socket.
Dec 13 02:13:43 np0005558317 systemd[76210]: Reached target Sockets.
Dec 13 02:13:43 np0005558317 systemd[76210]: Reached target Basic System.
Dec 13 02:13:43 np0005558317 systemd[76210]: Reached target Main User Target.
Dec 13 02:13:43 np0005558317 systemd[76210]: Startup finished in 87ms.
Dec 13 02:13:43 np0005558317 systemd[1]: Started User Manager for UID 42477.
Dec 13 02:13:43 np0005558317 systemd[1]: Started Session 20 of User ceph-admin.
Dec 13 02:13:43 np0005558317 systemd-logind[745]: New session 22 of user ceph-admin.
Dec 13 02:13:43 np0005558317 systemd[1]: Started Session 22 of User ceph-admin.
Dec 13 02:13:43 np0005558317 systemd-logind[745]: New session 23 of user ceph-admin.
Dec 13 02:13:43 np0005558317 systemd[1]: Started Session 23 of User ceph-admin.
Dec 13 02:13:43 np0005558317 ceph-mon[74928]: Set ssh ssh_identity_pub
Dec 13 02:13:43 np0005558317 systemd-logind[745]: New session 24 of user ceph-admin.
Dec 13 02:13:43 np0005558317 systemd[1]: Started Session 24 of User ceph-admin.
Dec 13 02:13:43 np0005558317 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Dec 13 02:13:43 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Dec 13 02:13:43 np0005558317 systemd-logind[745]: New session 25 of user ceph-admin.
Dec 13 02:13:44 np0005558317 systemd[1]: Started Session 25 of User ceph-admin.
Dec 13 02:13:44 np0005558317 ceph-mgr[75200]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 02:13:44 np0005558317 systemd-logind[745]: New session 26 of user ceph-admin.
Dec 13 02:13:44 np0005558317 systemd[1]: Started Session 26 of User ceph-admin.
Dec 13 02:13:44 np0005558317 systemd-logind[745]: New session 27 of user ceph-admin.
Dec 13 02:13:44 np0005558317 systemd[1]: Started Session 27 of User ceph-admin.
Dec 13 02:13:44 np0005558317 systemd-logind[745]: New session 28 of user ceph-admin.
Dec 13 02:13:44 np0005558317 systemd[1]: Started Session 28 of User ceph-admin.
Dec 13 02:13:45 np0005558317 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:13:45 np0005558317 systemd-logind[745]: New session 29 of user ceph-admin.
Dec 13 02:13:45 np0005558317 systemd[1]: Started Session 29 of User ceph-admin.
Dec 13 02:13:45 np0005558317 systemd-logind[745]: New session 30 of user ceph-admin.
Dec 13 02:13:45 np0005558317 systemd[1]: Started Session 30 of User ceph-admin.
Dec 13 02:13:45 np0005558317 ceph-mon[74928]: Deploying cephadm binary to compute-0
Dec 13 02:13:46 np0005558317 ceph-mgr[75200]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 02:13:46 np0005558317 systemd-logind[745]: New session 31 of user ceph-admin.
Dec 13 02:13:46 np0005558317 systemd[1]: Started Session 31 of User ceph-admin.
Dec 13 02:13:46 np0005558317 systemd-logind[745]: New session 32 of user ceph-admin.
Dec 13 02:13:46 np0005558317 systemd[1]: Started Session 32 of User ceph-admin.
Dec 13 02:13:46 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 13 02:13:46 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:46 np0005558317 ceph-mgr[75200]: [cephadm INFO root] Added host compute-0
Dec 13 02:13:46 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 13 02:13:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 13 02:13:47 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 02:13:47 np0005558317 stupefied_sammet[76180]: Added host 'compute-0' with addr '192.168.122.100'
Dec 13 02:13:47 np0005558317 systemd[1]: libpod-fe96fa435b331d9ee9e156957224e3931b814f9f393346102e47d8be70a8b1d5.scope: Deactivated successfully.
Dec 13 02:13:47 np0005558317 podman[76166]: 2025-12-13 07:13:47.018214057 +0000 UTC m=+4.565394790 container died fe96fa435b331d9ee9e156957224e3931b814f9f393346102e47d8be70a8b1d5 (image=quay.io/ceph/ceph:v20, name=stupefied_sammet, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:13:47 np0005558317 systemd[1]: var-lib-containers-storage-overlay-f8b6c4f373782b5ba3c83a613429540f38cb16e7b054bbee5e74403dc151e834-merged.mount: Deactivated successfully.
Dec 13 02:13:47 np0005558317 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:13:47 np0005558317 podman[76166]: 2025-12-13 07:13:47.044633878 +0000 UTC m=+4.591814611 container remove fe96fa435b331d9ee9e156957224e3931b814f9f393346102e47d8be70a8b1d5 (image=quay.io/ceph/ceph:v20, name=stupefied_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Dec 13 02:13:47 np0005558317 systemd[1]: libpod-conmon-fe96fa435b331d9ee9e156957224e3931b814f9f393346102e47d8be70a8b1d5.scope: Deactivated successfully.
Dec 13 02:13:47 np0005558317 podman[76593]: 2025-12-13 07:13:47.089481022 +0000 UTC m=+0.027002968 container create 292cec4b12eb3f98922d3a4b3d47b8da94f85232a1ebbe0940e5d565ee3cb7d7 (image=quay.io/ceph/ceph:v20, name=vigorous_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 02:13:47 np0005558317 systemd[1]: Started libpod-conmon-292cec4b12eb3f98922d3a4b3d47b8da94f85232a1ebbe0940e5d565ee3cb7d7.scope.
Dec 13 02:13:47 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:47 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd2d0a777d4b251de4f739e4e6828a4b2d568d928d1333bb959eae095026ce44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:47 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd2d0a777d4b251de4f739e4e6828a4b2d568d928d1333bb959eae095026ce44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:47 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd2d0a777d4b251de4f739e4e6828a4b2d568d928d1333bb959eae095026ce44/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:47 np0005558317 podman[76593]: 2025-12-13 07:13:47.161588681 +0000 UTC m=+0.099110626 container init 292cec4b12eb3f98922d3a4b3d47b8da94f85232a1ebbe0940e5d565ee3cb7d7 (image=quay.io/ceph/ceph:v20, name=vigorous_grothendieck, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:13:47 np0005558317 podman[76593]: 2025-12-13 07:13:47.166767707 +0000 UTC m=+0.104289662 container start 292cec4b12eb3f98922d3a4b3d47b8da94f85232a1ebbe0940e5d565ee3cb7d7 (image=quay.io/ceph/ceph:v20, name=vigorous_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 02:13:47 np0005558317 podman[76593]: 2025-12-13 07:13:47.16805088 +0000 UTC m=+0.105572845 container attach 292cec4b12eb3f98922d3a4b3d47b8da94f85232a1ebbe0940e5d565ee3cb7d7 (image=quay.io/ceph/ceph:v20, name=vigorous_grothendieck, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 02:13:47 np0005558317 podman[76593]: 2025-12-13 07:13:47.078186388 +0000 UTC m=+0.015708354 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:47 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:13:47 np0005558317 ceph-mgr[75200]: [cephadm INFO root] Saving service mon spec with placement count:5
Dec 13 02:13:47 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Dec 13 02:13:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 13 02:13:47 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:47 np0005558317 vigorous_grothendieck[76635]: Scheduled mon update...
Dec 13 02:13:47 np0005558317 systemd[1]: libpod-292cec4b12eb3f98922d3a4b3d47b8da94f85232a1ebbe0940e5d565ee3cb7d7.scope: Deactivated successfully.
Dec 13 02:13:47 np0005558317 podman[76593]: 2025-12-13 07:13:47.507745529 +0000 UTC m=+0.445267474 container died 292cec4b12eb3f98922d3a4b3d47b8da94f85232a1ebbe0940e5d565ee3cb7d7 (image=quay.io/ceph/ceph:v20, name=vigorous_grothendieck, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:13:47 np0005558317 systemd[1]: var-lib-containers-storage-overlay-dd2d0a777d4b251de4f739e4e6828a4b2d568d928d1333bb959eae095026ce44-merged.mount: Deactivated successfully.
Dec 13 02:13:47 np0005558317 podman[76593]: 2025-12-13 07:13:47.526361187 +0000 UTC m=+0.463883132 container remove 292cec4b12eb3f98922d3a4b3d47b8da94f85232a1ebbe0940e5d565ee3cb7d7 (image=quay.io/ceph/ceph:v20, name=vigorous_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 02:13:47 np0005558317 systemd[1]: libpod-conmon-292cec4b12eb3f98922d3a4b3d47b8da94f85232a1ebbe0940e5d565ee3cb7d7.scope: Deactivated successfully.
Dec 13 02:13:47 np0005558317 podman[76692]: 2025-12-13 07:13:47.564464595 +0000 UTC m=+0.024819332 container create 2667b3d5af6d0cec7731e8836bc191e0a46c70837632db045cf4e447e5d4b56a (image=quay.io/ceph/ceph:v20, name=amazing_mirzakhani, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:13:47 np0005558317 systemd[1]: Started libpod-conmon-2667b3d5af6d0cec7731e8836bc191e0a46c70837632db045cf4e447e5d4b56a.scope.
Dec 13 02:13:47 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:47 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1df3459a4a7d0c55b3fb2b0b9a141e34c1f0f38a6fcbe89bd51a3525bf82bae/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:47 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1df3459a4a7d0c55b3fb2b0b9a141e34c1f0f38a6fcbe89bd51a3525bf82bae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:47 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1df3459a4a7d0c55b3fb2b0b9a141e34c1f0f38a6fcbe89bd51a3525bf82bae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:47 np0005558317 podman[76692]: 2025-12-13 07:13:47.614681125 +0000 UTC m=+0.075035873 container init 2667b3d5af6d0cec7731e8836bc191e0a46c70837632db045cf4e447e5d4b56a (image=quay.io/ceph/ceph:v20, name=amazing_mirzakhani, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 02:13:47 np0005558317 podman[76692]: 2025-12-13 07:13:47.619071919 +0000 UTC m=+0.079426658 container start 2667b3d5af6d0cec7731e8836bc191e0a46c70837632db045cf4e447e5d4b56a (image=quay.io/ceph/ceph:v20, name=amazing_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 02:13:47 np0005558317 podman[76692]: 2025-12-13 07:13:47.620078482 +0000 UTC m=+0.080433240 container attach 2667b3d5af6d0cec7731e8836bc191e0a46c70837632db045cf4e447e5d4b56a (image=quay.io/ceph/ceph:v20, name=amazing_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 02:13:47 np0005558317 podman[76692]: 2025-12-13 07:13:47.554788022 +0000 UTC m=+0.015142770 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052609 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:13:47 np0005558317 podman[76670]: 2025-12-13 07:13:47.860038733 +0000 UTC m=+0.567290535 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:47 np0005558317 podman[76740]: 2025-12-13 07:13:47.927109872 +0000 UTC m=+0.024311598 container create 919c9273aad726c7a39268fbd4acfd1114c71d7437004a18a6fc83bdb6b6866b (image=quay.io/ceph/ceph:v20, name=mystifying_franklin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 02:13:47 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:13:47 np0005558317 ceph-mgr[75200]: [cephadm INFO root] Saving service mgr spec with placement count:2
Dec 13 02:13:47 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Dec 13 02:13:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 13 02:13:47 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:47 np0005558317 amazing_mirzakhani[76706]: Scheduled mgr update...
Dec 13 02:13:47 np0005558317 systemd[1]: Started libpod-conmon-919c9273aad726c7a39268fbd4acfd1114c71d7437004a18a6fc83bdb6b6866b.scope.
Dec 13 02:13:47 np0005558317 systemd[1]: libpod-2667b3d5af6d0cec7731e8836bc191e0a46c70837632db045cf4e447e5d4b56a.scope: Deactivated successfully.
Dec 13 02:13:47 np0005558317 conmon[76706]: conmon 2667b3d5af6d0cec7731 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2667b3d5af6d0cec7731e8836bc191e0a46c70837632db045cf4e447e5d4b56a.scope/container/memory.events
Dec 13 02:13:47 np0005558317 podman[76692]: 2025-12-13 07:13:47.958933963 +0000 UTC m=+0.419288701 container died 2667b3d5af6d0cec7731e8836bc191e0a46c70837632db045cf4e447e5d4b56a (image=quay.io/ceph/ceph:v20, name=amazing_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 02:13:47 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:47 np0005558317 podman[76740]: 2025-12-13 07:13:47.971119172 +0000 UTC m=+0.068320907 container init 919c9273aad726c7a39268fbd4acfd1114c71d7437004a18a6fc83bdb6b6866b (image=quay.io/ceph/ceph:v20, name=mystifying_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 02:13:47 np0005558317 podman[76740]: 2025-12-13 07:13:47.97510181 +0000 UTC m=+0.072303546 container start 919c9273aad726c7a39268fbd4acfd1114c71d7437004a18a6fc83bdb6b6866b (image=quay.io/ceph/ceph:v20, name=mystifying_franklin, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 02:13:47 np0005558317 podman[76740]: 2025-12-13 07:13:47.976132918 +0000 UTC m=+0.073334654 container attach 919c9273aad726c7a39268fbd4acfd1114c71d7437004a18a6fc83bdb6b6866b (image=quay.io/ceph/ceph:v20, name=mystifying_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:13:47 np0005558317 podman[76692]: 2025-12-13 07:13:47.977945496 +0000 UTC m=+0.438300235 container remove 2667b3d5af6d0cec7731e8836bc191e0a46c70837632db045cf4e447e5d4b56a (image=quay.io/ceph/ceph:v20, name=amazing_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:13:47 np0005558317 systemd[1]: libpod-conmon-2667b3d5af6d0cec7731e8836bc191e0a46c70837632db045cf4e447e5d4b56a.scope: Deactivated successfully.
Dec 13 02:13:47 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:47 np0005558317 ceph-mon[74928]: Added host compute-0
Dec 13 02:13:47 np0005558317 ceph-mon[74928]: Saving service mon spec with placement count:5
Dec 13 02:13:47 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:48 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:48 np0005558317 podman[76740]: 2025-12-13 07:13:47.917586827 +0000 UTC m=+0.014788583 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:48 np0005558317 podman[76770]: 2025-12-13 07:13:48.025514559 +0000 UTC m=+0.026609851 container create 51c1ed5dc16c80f606ddcf3689a21e980d916efa4cffec951151e3cfc7b65278 (image=quay.io/ceph/ceph:v20, name=vigorous_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 02:13:48 np0005558317 systemd[1]: var-lib-containers-storage-overlay-d1df3459a4a7d0c55b3fb2b0b9a141e34c1f0f38a6fcbe89bd51a3525bf82bae-merged.mount: Deactivated successfully.
Dec 13 02:13:48 np0005558317 mystifying_franklin[76755]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Dec 13 02:13:48 np0005558317 systemd[1]: Started libpod-conmon-51c1ed5dc16c80f606ddcf3689a21e980d916efa4cffec951151e3cfc7b65278.scope.
Dec 13 02:13:48 np0005558317 systemd[1]: libpod-919c9273aad726c7a39268fbd4acfd1114c71d7437004a18a6fc83bdb6b6866b.scope: Deactivated successfully.
Dec 13 02:13:48 np0005558317 conmon[76755]: conmon 919c9273aad726c7a392 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-919c9273aad726c7a39268fbd4acfd1114c71d7437004a18a6fc83bdb6b6866b.scope/container/memory.events
Dec 13 02:13:48 np0005558317 podman[76740]: 2025-12-13 07:13:48.057854219 +0000 UTC m=+0.155055955 container died 919c9273aad726c7a39268fbd4acfd1114c71d7437004a18a6fc83bdb6b6866b (image=quay.io/ceph/ceph:v20, name=mystifying_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 02:13:48 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:48 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c51760f9f9e3f87c2cc44a94c2d219bcfde17b3fd0f6b4721e8c605c13627b3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:48 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c51760f9f9e3f87c2cc44a94c2d219bcfde17b3fd0f6b4721e8c605c13627b3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:48 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c51760f9f9e3f87c2cc44a94c2d219bcfde17b3fd0f6b4721e8c605c13627b3b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:48 np0005558317 systemd[1]: var-lib-containers-storage-overlay-a5e61bf81b707fcbdf27747d5f17afa77f55dfc542c3397f9571c94baa9d8194-merged.mount: Deactivated successfully.
Dec 13 02:13:48 np0005558317 podman[76770]: 2025-12-13 07:13:48.079591147 +0000 UTC m=+0.080686428 container init 51c1ed5dc16c80f606ddcf3689a21e980d916efa4cffec951151e3cfc7b65278 (image=quay.io/ceph/ceph:v20, name=vigorous_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:13:48 np0005558317 podman[76740]: 2025-12-13 07:13:48.086390921 +0000 UTC m=+0.183592657 container remove 919c9273aad726c7a39268fbd4acfd1114c71d7437004a18a6fc83bdb6b6866b (image=quay.io/ceph/ceph:v20, name=mystifying_franklin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 02:13:48 np0005558317 podman[76770]: 2025-12-13 07:13:48.08834771 +0000 UTC m=+0.089442981 container start 51c1ed5dc16c80f606ddcf3689a21e980d916efa4cffec951151e3cfc7b65278 (image=quay.io/ceph/ceph:v20, name=vigorous_feistel, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Dec 13 02:13:48 np0005558317 podman[76770]: 2025-12-13 07:13:48.095511007 +0000 UTC m=+0.096606298 container attach 51c1ed5dc16c80f606ddcf3689a21e980d916efa4cffec951151e3cfc7b65278 (image=quay.io/ceph/ceph:v20, name=vigorous_feistel, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:13:48 np0005558317 systemd[1]: libpod-conmon-919c9273aad726c7a39268fbd4acfd1114c71d7437004a18a6fc83bdb6b6866b.scope: Deactivated successfully.
Dec 13 02:13:48 np0005558317 podman[76770]: 2025-12-13 07:13:48.014235233 +0000 UTC m=+0.015330534 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:48 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Dec 13 02:13:48 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:48 np0005558317 ceph-mgr[75200]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 02:13:48 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:13:48 np0005558317 ceph-mgr[75200]: [cephadm INFO root] Saving service crash spec with placement *
Dec 13 02:13:48 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Dec 13 02:13:48 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 13 02:13:48 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:48 np0005558317 vigorous_feistel[76783]: Scheduled crash update...
Dec 13 02:13:48 np0005558317 systemd[1]: libpod-51c1ed5dc16c80f606ddcf3689a21e980d916efa4cffec951151e3cfc7b65278.scope: Deactivated successfully.
Dec 13 02:13:48 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:13:48 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:48 np0005558317 podman[76887]: 2025-12-13 07:13:48.468030238 +0000 UTC m=+0.018597546 container died 51c1ed5dc16c80f606ddcf3689a21e980d916efa4cffec951151e3cfc7b65278 (image=quay.io/ceph/ceph:v20, name=vigorous_feistel, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:13:48 np0005558317 systemd[1]: var-lib-containers-storage-overlay-c51760f9f9e3f87c2cc44a94c2d219bcfde17b3fd0f6b4721e8c605c13627b3b-merged.mount: Deactivated successfully.
Dec 13 02:13:48 np0005558317 podman[76887]: 2025-12-13 07:13:48.485681785 +0000 UTC m=+0.036249092 container remove 51c1ed5dc16c80f606ddcf3689a21e980d916efa4cffec951151e3cfc7b65278 (image=quay.io/ceph/ceph:v20, name=vigorous_feistel, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:13:48 np0005558317 systemd[1]: libpod-conmon-51c1ed5dc16c80f606ddcf3689a21e980d916efa4cffec951151e3cfc7b65278.scope: Deactivated successfully.
Dec 13 02:13:48 np0005558317 podman[76923]: 2025-12-13 07:13:48.53302902 +0000 UTC m=+0.028336256 container create a085c3f3c87add73250f1ee65dfbc598c543b92df4323955faa9961a0bd5459b (image=quay.io/ceph/ceph:v20, name=angry_shaw, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:13:48 np0005558317 systemd[1]: Started libpod-conmon-a085c3f3c87add73250f1ee65dfbc598c543b92df4323955faa9961a0bd5459b.scope.
Dec 13 02:13:48 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:48 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651281a5a237fc10c87f7d45bd56b1fc13ab68c1519e179ff9a75189eec31324/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:48 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651281a5a237fc10c87f7d45bd56b1fc13ab68c1519e179ff9a75189eec31324/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:48 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651281a5a237fc10c87f7d45bd56b1fc13ab68c1519e179ff9a75189eec31324/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:48 np0005558317 podman[76923]: 2025-12-13 07:13:48.590135665 +0000 UTC m=+0.085442920 container init a085c3f3c87add73250f1ee65dfbc598c543b92df4323955faa9961a0bd5459b (image=quay.io/ceph/ceph:v20, name=angry_shaw, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:13:48 np0005558317 podman[76923]: 2025-12-13 07:13:48.595003838 +0000 UTC m=+0.090311083 container start a085c3f3c87add73250f1ee65dfbc598c543b92df4323955faa9961a0bd5459b (image=quay.io/ceph/ceph:v20, name=angry_shaw, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:13:48 np0005558317 podman[76923]: 2025-12-13 07:13:48.597464854 +0000 UTC m=+0.092772109 container attach a085c3f3c87add73250f1ee65dfbc598c543b92df4323955faa9961a0bd5459b (image=quay.io/ceph/ceph:v20, name=angry_shaw, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 02:13:48 np0005558317 podman[76923]: 2025-12-13 07:13:48.520250977 +0000 UTC m=+0.015558242 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:48 np0005558317 podman[77020]: 2025-12-13 07:13:48.891272391 +0000 UTC m=+0.046395847 container exec 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 02:13:48 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Dec 13 02:13:48 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/113461477' entity='client.admin' 
Dec 13 02:13:48 np0005558317 systemd[1]: libpod-a085c3f3c87add73250f1ee65dfbc598c543b92df4323955faa9961a0bd5459b.scope: Deactivated successfully.
Dec 13 02:13:48 np0005558317 podman[76923]: 2025-12-13 07:13:48.920623573 +0000 UTC m=+0.415930809 container died a085c3f3c87add73250f1ee65dfbc598c543b92df4323955faa9961a0bd5459b (image=quay.io/ceph/ceph:v20, name=angry_shaw, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:13:48 np0005558317 systemd[1]: var-lib-containers-storage-overlay-651281a5a237fc10c87f7d45bd56b1fc13ab68c1519e179ff9a75189eec31324-merged.mount: Deactivated successfully.
Dec 13 02:13:48 np0005558317 podman[76923]: 2025-12-13 07:13:48.944758149 +0000 UTC m=+0.440065384 container remove a085c3f3c87add73250f1ee65dfbc598c543b92df4323955faa9961a0bd5459b (image=quay.io/ceph/ceph:v20, name=angry_shaw, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 02:13:48 np0005558317 systemd[1]: libpod-conmon-a085c3f3c87add73250f1ee65dfbc598c543b92df4323955faa9961a0bd5459b.scope: Deactivated successfully.
Dec 13 02:13:48 np0005558317 podman[77020]: 2025-12-13 07:13:48.971718015 +0000 UTC m=+0.126841472 container exec_died 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 02:13:48 np0005558317 podman[77049]: 2025-12-13 07:13:48.996147183 +0000 UTC m=+0.031432246 container create 0b8f07f5bfba23797a0eefebdf8527402b446ea41040482e58e9122c99734b02 (image=quay.io/ceph/ceph:v20, name=practical_bhaskara, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Dec 13 02:13:49 np0005558317 systemd[1]: Started libpod-conmon-0b8f07f5bfba23797a0eefebdf8527402b446ea41040482e58e9122c99734b02.scope.
Dec 13 02:13:49 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:49 np0005558317 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:13:49 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f0102fb24047bd1132c317cbf12d1dca4155d17fc330df05ac0eefaff1059a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:49 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f0102fb24047bd1132c317cbf12d1dca4155d17fc330df05ac0eefaff1059a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:49 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f0102fb24047bd1132c317cbf12d1dca4155d17fc330df05ac0eefaff1059a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:49 np0005558317 podman[77049]: 2025-12-13 07:13:49.054293934 +0000 UTC m=+0.089579016 container init 0b8f07f5bfba23797a0eefebdf8527402b446ea41040482e58e9122c99734b02 (image=quay.io/ceph/ceph:v20, name=practical_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 02:13:49 np0005558317 podman[77049]: 2025-12-13 07:13:49.061267254 +0000 UTC m=+0.096552316 container start 0b8f07f5bfba23797a0eefebdf8527402b446ea41040482e58e9122c99734b02 (image=quay.io/ceph/ceph:v20, name=practical_bhaskara, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:13:49 np0005558317 podman[77049]: 2025-12-13 07:13:49.062346644 +0000 UTC m=+0.097631706 container attach 0b8f07f5bfba23797a0eefebdf8527402b446ea41040482e58e9122c99734b02 (image=quay.io/ceph/ceph:v20, name=practical_bhaskara, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:13:49 np0005558317 podman[77049]: 2025-12-13 07:13:48.984314407 +0000 UTC m=+0.019599489 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:49 np0005558317 ceph-mon[74928]: Saving service mgr spec with placement count:2
Dec 13 02:13:49 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:49 np0005558317 ceph-mon[74928]: Saving service crash spec with placement *
Dec 13 02:13:49 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:49 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:49 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/113461477' entity='client.admin' 
Dec 13 02:13:49 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:13:49 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:49 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:13:49 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Dec 13 02:13:49 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:49 np0005558317 systemd[1]: libpod-0b8f07f5bfba23797a0eefebdf8527402b446ea41040482e58e9122c99734b02.scope: Deactivated successfully.
Dec 13 02:13:49 np0005558317 podman[77049]: 2025-12-13 07:13:49.403766465 +0000 UTC m=+0.439051527 container died 0b8f07f5bfba23797a0eefebdf8527402b446ea41040482e58e9122c99734b02 (image=quay.io/ceph/ceph:v20, name=practical_bhaskara, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:13:49 np0005558317 systemd[1]: var-lib-containers-storage-overlay-12f0102fb24047bd1132c317cbf12d1dca4155d17fc330df05ac0eefaff1059a-merged.mount: Deactivated successfully.
Dec 13 02:13:49 np0005558317 podman[77049]: 2025-12-13 07:13:49.441542467 +0000 UTC m=+0.476827529 container remove 0b8f07f5bfba23797a0eefebdf8527402b446ea41040482e58e9122c99734b02 (image=quay.io/ceph/ceph:v20, name=practical_bhaskara, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:13:49 np0005558317 systemd[1]: libpod-conmon-0b8f07f5bfba23797a0eefebdf8527402b446ea41040482e58e9122c99734b02.scope: Deactivated successfully.
Dec 13 02:13:49 np0005558317 podman[77193]: 2025-12-13 07:13:49.497679558 +0000 UTC m=+0.032638914 container create b81b90d10f82c3ee5c194b0b5bccba3bafa014e53adb3da1b2db07e9384af142 (image=quay.io/ceph/ceph:v20, name=reverent_kilby, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 02:13:49 np0005558317 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77212 (sysctl)
Dec 13 02:13:49 np0005558317 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec 13 02:13:49 np0005558317 systemd[1]: Started libpod-conmon-b81b90d10f82c3ee5c194b0b5bccba3bafa014e53adb3da1b2db07e9384af142.scope.
Dec 13 02:13:49 np0005558317 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec 13 02:13:49 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:49 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/016e6141d7190888b0c651d223df1ba71c6fb81017726a27be40a028dbab9bc6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:49 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/016e6141d7190888b0c651d223df1ba71c6fb81017726a27be40a028dbab9bc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:49 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/016e6141d7190888b0c651d223df1ba71c6fb81017726a27be40a028dbab9bc6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:49 np0005558317 podman[77193]: 2025-12-13 07:13:49.560416519 +0000 UTC m=+0.095375875 container init b81b90d10f82c3ee5c194b0b5bccba3bafa014e53adb3da1b2db07e9384af142 (image=quay.io/ceph/ceph:v20, name=reverent_kilby, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 02:13:49 np0005558317 podman[77193]: 2025-12-13 07:13:49.567214769 +0000 UTC m=+0.102174115 container start b81b90d10f82c3ee5c194b0b5bccba3bafa014e53adb3da1b2db07e9384af142 (image=quay.io/ceph/ceph:v20, name=reverent_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:13:49 np0005558317 podman[77193]: 2025-12-13 07:13:49.568643615 +0000 UTC m=+0.103602961 container attach b81b90d10f82c3ee5c194b0b5bccba3bafa014e53adb3da1b2db07e9384af142 (image=quay.io/ceph/ceph:v20, name=reverent_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 02:13:49 np0005558317 podman[77193]: 2025-12-13 07:13:49.484494259 +0000 UTC m=+0.019453605 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:49 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:13:49 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 13 02:13:49 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:49 np0005558317 ceph-mgr[75200]: [cephadm INFO root] Added label _admin to host compute-0
Dec 13 02:13:49 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Dec 13 02:13:49 np0005558317 reverent_kilby[77219]: Added label _admin to host compute-0
Dec 13 02:13:49 np0005558317 systemd[1]: libpod-b81b90d10f82c3ee5c194b0b5bccba3bafa014e53adb3da1b2db07e9384af142.scope: Deactivated successfully.
Dec 13 02:13:49 np0005558317 podman[77193]: 2025-12-13 07:13:49.929738388 +0000 UTC m=+0.464697734 container died b81b90d10f82c3ee5c194b0b5bccba3bafa014e53adb3da1b2db07e9384af142 (image=quay.io/ceph/ceph:v20, name=reverent_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 02:13:49 np0005558317 systemd[1]: var-lib-containers-storage-overlay-016e6141d7190888b0c651d223df1ba71c6fb81017726a27be40a028dbab9bc6-merged.mount: Deactivated successfully.
Dec 13 02:13:49 np0005558317 podman[77193]: 2025-12-13 07:13:49.950883151 +0000 UTC m=+0.485842497 container remove b81b90d10f82c3ee5c194b0b5bccba3bafa014e53adb3da1b2db07e9384af142 (image=quay.io/ceph/ceph:v20, name=reverent_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 02:13:49 np0005558317 systemd[1]: libpod-conmon-b81b90d10f82c3ee5c194b0b5bccba3bafa014e53adb3da1b2db07e9384af142.scope: Deactivated successfully.
Dec 13 02:13:50 np0005558317 podman[77321]: 2025-12-13 07:13:49.999942726 +0000 UTC m=+0.031253119 container create 2496c47064739a0730a50459da8e107c9f62171eff56514322be5a5b48d143bc (image=quay.io/ceph/ceph:v20, name=hardcore_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:13:50 np0005558317 systemd[1]: Started libpod-conmon-2496c47064739a0730a50459da8e107c9f62171eff56514322be5a5b48d143bc.scope.
Dec 13 02:13:50 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:50 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7851f01e58ecfeceb28e5dc18b34d25d1884478a0e5c90d8c4f2a2b00859324/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:50 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7851f01e58ecfeceb28e5dc18b34d25d1884478a0e5c90d8c4f2a2b00859324/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:50 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7851f01e58ecfeceb28e5dc18b34d25d1884478a0e5c90d8c4f2a2b00859324/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:50 np0005558317 podman[77321]: 2025-12-13 07:13:50.074470082 +0000 UTC m=+0.105780496 container init 2496c47064739a0730a50459da8e107c9f62171eff56514322be5a5b48d143bc (image=quay.io/ceph/ceph:v20, name=hardcore_leavitt, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:13:50 np0005558317 podman[77321]: 2025-12-13 07:13:50.080617641 +0000 UTC m=+0.111928034 container start 2496c47064739a0730a50459da8e107c9f62171eff56514322be5a5b48d143bc (image=quay.io/ceph/ceph:v20, name=hardcore_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 02:13:50 np0005558317 podman[77321]: 2025-12-13 07:13:49.988662469 +0000 UTC m=+0.019972882 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:50 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:13:50 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:50 np0005558317 ceph-mgr[75200]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 02:13:50 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:50 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:50 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:50 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:50 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Dec 13 02:13:50 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1386230068' entity='client.admin' 
Dec 13 02:13:50 np0005558317 hardcore_leavitt[77341]: set mgr/dashboard/cluster/status
Dec 13 02:13:50 np0005558317 systemd[1]: libpod-2496c47064739a0730a50459da8e107c9f62171eff56514322be5a5b48d143bc.scope: Deactivated successfully.
Dec 13 02:13:51 np0005558317 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:13:51 np0005558317 ceph-mon[74928]: Added label _admin to host compute-0
Dec 13 02:13:51 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/1386230068' entity='client.admin' 
Dec 13 02:13:51 np0005558317 podman[77321]: 2025-12-13 07:13:51.648508501 +0000 UTC m=+1.679818894 container attach 2496c47064739a0730a50459da8e107c9f62171eff56514322be5a5b48d143bc (image=quay.io/ceph/ceph:v20, name=hardcore_leavitt, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:13:51 np0005558317 podman[77321]: 2025-12-13 07:13:51.649059037 +0000 UTC m=+1.680369419 container died 2496c47064739a0730a50459da8e107c9f62171eff56514322be5a5b48d143bc (image=quay.io/ceph/ceph:v20, name=hardcore_leavitt, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:13:51 np0005558317 systemd[1]: var-lib-containers-storage-overlay-d7851f01e58ecfeceb28e5dc18b34d25d1884478a0e5c90d8c4f2a2b00859324-merged.mount: Deactivated successfully.
Dec 13 02:13:51 np0005558317 podman[77321]: 2025-12-13 07:13:51.671268422 +0000 UTC m=+1.702578814 container remove 2496c47064739a0730a50459da8e107c9f62171eff56514322be5a5b48d143bc (image=quay.io/ceph/ceph:v20, name=hardcore_leavitt, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 02:13:51 np0005558317 systemd[1]: Reloading.
Dec 13 02:13:51 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:13:51 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:13:51 np0005558317 podman[77470]: 2025-12-13 07:13:51.811996159 +0000 UTC m=+0.032394685 container create f8758a7744e75ff461484070444dface6fd180749f148d3a6402bb8fd65d6dfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bhabha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 02:13:51 np0005558317 podman[77470]: 2025-12-13 07:13:51.799790472 +0000 UTC m=+0.020189007 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:13:51 np0005558317 systemd[1]: libpod-conmon-2496c47064739a0730a50459da8e107c9f62171eff56514322be5a5b48d143bc.scope: Deactivated successfully.
Dec 13 02:13:51 np0005558317 systemd[1]: Started libpod-conmon-f8758a7744e75ff461484070444dface6fd180749f148d3a6402bb8fd65d6dfd.scope.
Dec 13 02:13:51 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:51 np0005558317 podman[77470]: 2025-12-13 07:13:51.951575979 +0000 UTC m=+0.171974494 container init f8758a7744e75ff461484070444dface6fd180749f148d3a6402bb8fd65d6dfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:13:51 np0005558317 podman[77470]: 2025-12-13 07:13:51.957803476 +0000 UTC m=+0.178202002 container start f8758a7744e75ff461484070444dface6fd180749f148d3a6402bb8fd65d6dfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:13:51 np0005558317 podman[77470]: 2025-12-13 07:13:51.959005697 +0000 UTC m=+0.179404213 container attach f8758a7744e75ff461484070444dface6fd180749f148d3a6402bb8fd65d6dfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:13:51 np0005558317 unruffled_bhabha[77501]: 167 167
Dec 13 02:13:51 np0005558317 systemd[1]: libpod-f8758a7744e75ff461484070444dface6fd180749f148d3a6402bb8fd65d6dfd.scope: Deactivated successfully.
Dec 13 02:13:51 np0005558317 podman[77470]: 2025-12-13 07:13:51.962843492 +0000 UTC m=+0.183242008 container died f8758a7744e75ff461484070444dface6fd180749f148d3a6402bb8fd65d6dfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:13:51 np0005558317 systemd[1]: var-lib-containers-storage-overlay-bc032957a749dd65b25ae0365014108e336f667e366ff812e68ef5da8a5c50cc-merged.mount: Deactivated successfully.
Dec 13 02:13:51 np0005558317 podman[77470]: 2025-12-13 07:13:51.982703761 +0000 UTC m=+0.203102277 container remove f8758a7744e75ff461484070444dface6fd180749f148d3a6402bb8fd65d6dfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:13:51 np0005558317 systemd[1]: libpod-conmon-f8758a7744e75ff461484070444dface6fd180749f148d3a6402bb8fd65d6dfd.scope: Deactivated successfully.
Dec 13 02:13:52 np0005558317 podman[77523]: 2025-12-13 07:13:52.096395138 +0000 UTC m=+0.028162829 container create 15f1b3d796270ddbf6cc38009091f932a84fafe634b4e6fdd759d4e4b23e1278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hodgkin, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 02:13:52 np0005558317 systemd[1]: Started libpod-conmon-15f1b3d796270ddbf6cc38009091f932a84fafe634b4e6fdd759d4e4b23e1278.scope.
Dec 13 02:13:52 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:52 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed154acef1e317a46c6b69d9c911aa863d805d7bfdcb025d96d35669f2a1e464/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:52 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed154acef1e317a46c6b69d9c911aa863d805d7bfdcb025d96d35669f2a1e464/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:52 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed154acef1e317a46c6b69d9c911aa863d805d7bfdcb025d96d35669f2a1e464/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:52 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed154acef1e317a46c6b69d9c911aa863d805d7bfdcb025d96d35669f2a1e464/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:52 np0005558317 podman[77523]: 2025-12-13 07:13:52.155398327 +0000 UTC m=+0.087166029 container init 15f1b3d796270ddbf6cc38009091f932a84fafe634b4e6fdd759d4e4b23e1278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:13:52 np0005558317 podman[77523]: 2025-12-13 07:13:52.159987095 +0000 UTC m=+0.091754776 container start 15f1b3d796270ddbf6cc38009091f932a84fafe634b4e6fdd759d4e4b23e1278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:13:52 np0005558317 podman[77523]: 2025-12-13 07:13:52.161451158 +0000 UTC m=+0.093218839 container attach 15f1b3d796270ddbf6cc38009091f932a84fafe634b4e6fdd759d4e4b23e1278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:13:52 np0005558317 podman[77523]: 2025-12-13 07:13:52.085176847 +0000 UTC m=+0.016944529 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:13:52 np0005558317 ceph-mgr[75200]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 02:13:52 np0005558317 python3[77566]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:13:52 np0005558317 podman[77570]: 2025-12-13 07:13:52.393386041 +0000 UTC m=+0.028875720 container create 3765322c620d7c62d05e528ae327cdcdc515a1bd856ce1a9205518c3cb068d44 (image=quay.io/ceph/ceph:v20, name=nice_chatelet, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 02:13:52 np0005558317 systemd[1]: Started libpod-conmon-3765322c620d7c62d05e528ae327cdcdc515a1bd856ce1a9205518c3cb068d44.scope.
Dec 13 02:13:52 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:52 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36c37f0822196e2ceca9dd20459e587956bb8b35dd73b42dc6e61330eedc0bd3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:52 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36c37f0822196e2ceca9dd20459e587956bb8b35dd73b42dc6e61330eedc0bd3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:52 np0005558317 podman[77570]: 2025-12-13 07:13:52.434896052 +0000 UTC m=+0.070385731 container init 3765322c620d7c62d05e528ae327cdcdc515a1bd856ce1a9205518c3cb068d44 (image=quay.io/ceph/ceph:v20, name=nice_chatelet, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:13:52 np0005558317 podman[77570]: 2025-12-13 07:13:52.439395913 +0000 UTC m=+0.074885591 container start 3765322c620d7c62d05e528ae327cdcdc515a1bd856ce1a9205518c3cb068d44 (image=quay.io/ceph/ceph:v20, name=nice_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:13:52 np0005558317 podman[77570]: 2025-12-13 07:13:52.441467156 +0000 UTC m=+0.076956856 container attach 3765322c620d7c62d05e528ae327cdcdc515a1bd856ce1a9205518c3cb068d44 (image=quay.io/ceph/ceph:v20, name=nice_chatelet, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 02:13:52 np0005558317 podman[77570]: 2025-12-13 07:13:52.383251688 +0000 UTC m=+0.018741387 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]: [
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:    {
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:        "available": false,
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:        "being_replaced": false,
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:        "ceph_device_lvm": false,
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:        "lsm_data": {},
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:        "lvs": [],
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:        "path": "/dev/sr0",
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:        "rejected_reasons": [
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:            "Insufficient space (<5GB)",
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:            "Has a FileSystem"
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:        ],
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:        "sys_api": {
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:            "actuators": null,
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:            "device_nodes": [
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:                "sr0"
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:            ],
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:            "devname": "sr0",
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:            "human_readable_size": "474.00 KB",
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:            "id_bus": "ata",
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:            "model": "QEMU DVD-ROM",
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:            "nr_requests": "64",
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:            "parent": "/dev/sr0",
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:            "partitions": {},
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:            "path": "/dev/sr0",
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:            "removable": "1",
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:            "rev": "2.5+",
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:            "ro": "0",
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:            "rotational": "1",
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:            "sas_address": "",
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:            "sas_device_handle": "",
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:            "scheduler_mode": "mq-deadline",
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:            "sectors": 0,
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:            "sectorsize": "2048",
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:            "size": 485376.0,
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:            "support_discard": "2048",
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:            "type": "disk",
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:            "vendor": "QEMU"
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:        }
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]:    }
Dec 13 02:13:52 np0005558317 dazzling_hodgkin[77536]: ]
Dec 13 02:13:52 np0005558317 systemd[1]: libpod-15f1b3d796270ddbf6cc38009091f932a84fafe634b4e6fdd759d4e4b23e1278.scope: Deactivated successfully.
Dec 13 02:13:52 np0005558317 podman[77523]: 2025-12-13 07:13:52.542120502 +0000 UTC m=+0.473888183 container died 15f1b3d796270ddbf6cc38009091f932a84fafe634b4e6fdd759d4e4b23e1278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hodgkin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:13:52 np0005558317 podman[77523]: 2025-12-13 07:13:52.560917672 +0000 UTC m=+0.492685343 container remove 15f1b3d796270ddbf6cc38009091f932a84fafe634b4e6fdd759d4e4b23e1278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hodgkin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 02:13:52 np0005558317 systemd[1]: libpod-conmon-15f1b3d796270ddbf6cc38009091f932a84fafe634b4e6fdd759d4e4b23e1278.scope: Deactivated successfully.
Dec 13 02:13:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:13:52 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:13:52 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:13:52 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:13:52 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 02:13:52 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 02:13:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:13:52 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:13:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:13:52 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:13:52 np0005558317 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 13 02:13:52 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 13 02:13:52 np0005558317 systemd[1]: var-lib-containers-storage-overlay-ed154acef1e317a46c6b69d9c911aa863d805d7bfdcb025d96d35669f2a1e464-merged.mount: Deactivated successfully.
Dec 13 02:13:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054702 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:13:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Dec 13 02:13:52 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3320068912' entity='client.admin' 
Dec 13 02:13:52 np0005558317 systemd[1]: libpod-3765322c620d7c62d05e528ae327cdcdc515a1bd856ce1a9205518c3cb068d44.scope: Deactivated successfully.
Dec 13 02:13:52 np0005558317 podman[77570]: 2025-12-13 07:13:52.775384614 +0000 UTC m=+0.410874293 container died 3765322c620d7c62d05e528ae327cdcdc515a1bd856ce1a9205518c3cb068d44 (image=quay.io/ceph/ceph:v20, name=nice_chatelet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 02:13:52 np0005558317 systemd[1]: var-lib-containers-storage-overlay-36c37f0822196e2ceca9dd20459e587956bb8b35dd73b42dc6e61330eedc0bd3-merged.mount: Deactivated successfully.
Dec 13 02:13:52 np0005558317 podman[77570]: 2025-12-13 07:13:52.799305898 +0000 UTC m=+0.434795577 container remove 3765322c620d7c62d05e528ae327cdcdc515a1bd856ce1a9205518c3cb068d44 (image=quay.io/ceph/ceph:v20, name=nice_chatelet, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 02:13:52 np0005558317 systemd[1]: libpod-conmon-3765322c620d7c62d05e528ae327cdcdc515a1bd856ce1a9205518c3cb068d44.scope: Deactivated successfully.
Dec 13 02:13:53 np0005558317 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:13:53 np0005558317 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/config/ceph.conf
Dec 13 02:13:53 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/config/ceph.conf
Dec 13 02:13:53 np0005558317 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 13 02:13:53 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 13 02:13:53 np0005558317 ansible-async_wrapper.py[78808]: Invoked with j795524582964 30 /home/zuul/.ansible/tmp/ansible-tmp-1765610033.0849185-36867-51326405911085/AnsiballZ_command.py _
Dec 13 02:13:53 np0005558317 ansible-async_wrapper.py[78879]: Starting module and watcher
Dec 13 02:13:53 np0005558317 ansible-async_wrapper.py[78879]: Start watching 78880 (30)
Dec 13 02:13:53 np0005558317 ansible-async_wrapper.py[78880]: Start module (78880)
Dec 13 02:13:53 np0005558317 ansible-async_wrapper.py[78808]: Return async_wrapper task started.
Dec 13 02:13:53 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:53 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:53 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:53 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:53 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 02:13:53 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:13:53 np0005558317 ceph-mon[74928]: Updating compute-0:/etc/ceph/ceph.conf
Dec 13 02:13:53 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/3320068912' entity='client.admin' 
Dec 13 02:13:53 np0005558317 ceph-mon[74928]: Updating compute-0:/var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/config/ceph.conf
Dec 13 02:13:53 np0005558317 ceph-mon[74928]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 13 02:13:53 np0005558317 python3[78882]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:13:53 np0005558317 podman[78944]: 2025-12-13 07:13:53.715844171 +0000 UTC m=+0.032246518 container create ceea0c13a46f6be379ef5012ef9afdaad7cf0383c5c1aabb568fec8654474156 (image=quay.io/ceph/ceph:v20, name=naughty_germain, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 02:13:53 np0005558317 systemd[1]: Started libpod-conmon-ceea0c13a46f6be379ef5012ef9afdaad7cf0383c5c1aabb568fec8654474156.scope.
Dec 13 02:13:53 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:53 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04bc46ecc7846dde6377a8007e7182475c1eeb21760ca8ecaf54e1b64424c8fa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:53 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04bc46ecc7846dde6377a8007e7182475c1eeb21760ca8ecaf54e1b64424c8fa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:53 np0005558317 podman[78944]: 2025-12-13 07:13:53.763305111 +0000 UTC m=+0.079707467 container init ceea0c13a46f6be379ef5012ef9afdaad7cf0383c5c1aabb568fec8654474156 (image=quay.io/ceph/ceph:v20, name=naughty_germain, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:13:53 np0005558317 podman[78944]: 2025-12-13 07:13:53.769252692 +0000 UTC m=+0.085655038 container start ceea0c13a46f6be379ef5012ef9afdaad7cf0383c5c1aabb568fec8654474156 (image=quay.io/ceph/ceph:v20, name=naughty_germain, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Dec 13 02:13:53 np0005558317 podman[78944]: 2025-12-13 07:13:53.771240049 +0000 UTC m=+0.087642405 container attach ceea0c13a46f6be379ef5012ef9afdaad7cf0383c5c1aabb568fec8654474156 (image=quay.io/ceph/ceph:v20, name=naughty_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 02:13:53 np0005558317 podman[78944]: 2025-12-13 07:13:53.701309665 +0000 UTC m=+0.017712031 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:53 np0005558317 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/config/ceph.client.admin.keyring
Dec 13 02:13:53 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/config/ceph.client.admin.keyring
Dec 13 02:13:54 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 02:13:54 np0005558317 naughty_germain[78990]: 
Dec 13 02:13:54 np0005558317 naughty_germain[78990]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 13 02:13:54 np0005558317 systemd[1]: libpod-ceea0c13a46f6be379ef5012ef9afdaad7cf0383c5c1aabb568fec8654474156.scope: Deactivated successfully.
Dec 13 02:13:54 np0005558317 podman[78944]: 2025-12-13 07:13:54.122834911 +0000 UTC m=+0.439237257 container died ceea0c13a46f6be379ef5012ef9afdaad7cf0383c5c1aabb568fec8654474156 (image=quay.io/ceph/ceph:v20, name=naughty_germain, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 02:13:54 np0005558317 systemd[1]: var-lib-containers-storage-overlay-04bc46ecc7846dde6377a8007e7182475c1eeb21760ca8ecaf54e1b64424c8fa-merged.mount: Deactivated successfully.
Dec 13 02:13:54 np0005558317 podman[78944]: 2025-12-13 07:13:54.148244533 +0000 UTC m=+0.464646879 container remove ceea0c13a46f6be379ef5012ef9afdaad7cf0383c5c1aabb568fec8654474156 (image=quay.io/ceph/ceph:v20, name=naughty_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:13:54 np0005558317 systemd[1]: libpod-conmon-ceea0c13a46f6be379ef5012ef9afdaad7cf0383c5c1aabb568fec8654474156.scope: Deactivated successfully.
Dec 13 02:13:54 np0005558317 ansible-async_wrapper.py[78880]: Module complete (78880)
Dec 13 02:13:54 np0005558317 ceph-mgr[75200]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 02:13:54 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:13:54 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:54 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:13:54 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:54 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:13:54 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:54 np0005558317 ceph-mgr[75200]: [progress INFO root] update: starting ev a606794a-c965-49eb-88dd-648db5ecfed2 (Updating crash deployment (+1 -> 1))
Dec 13 02:13:54 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 13 02:13:54 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 13 02:13:54 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 13 02:13:54 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:13:54 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:13:54 np0005558317 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Dec 13 02:13:54 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Dec 13 02:13:54 np0005558317 podman[79443]: 2025-12-13 07:13:54.717727118 +0000 UTC m=+0.026309053 container create 991ecccef7f6c0fac0eee4769d013507b751594afca639e6a78ade442cc3828e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_poitras, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 02:13:54 np0005558317 systemd[1]: Started libpod-conmon-991ecccef7f6c0fac0eee4769d013507b751594afca639e6a78ade442cc3828e.scope.
Dec 13 02:13:54 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:54 np0005558317 podman[79443]: 2025-12-13 07:13:54.759240876 +0000 UTC m=+0.067822812 container init 991ecccef7f6c0fac0eee4769d013507b751594afca639e6a78ade442cc3828e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:13:54 np0005558317 podman[79443]: 2025-12-13 07:13:54.765270613 +0000 UTC m=+0.073852549 container start 991ecccef7f6c0fac0eee4769d013507b751594afca639e6a78ade442cc3828e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_poitras, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 02:13:54 np0005558317 podman[79443]: 2025-12-13 07:13:54.766429332 +0000 UTC m=+0.075011266 container attach 991ecccef7f6c0fac0eee4769d013507b751594afca639e6a78ade442cc3828e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_poitras, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 02:13:54 np0005558317 gracious_poitras[79476]: 167 167
Dec 13 02:13:54 np0005558317 systemd[1]: libpod-991ecccef7f6c0fac0eee4769d013507b751594afca639e6a78ade442cc3828e.scope: Deactivated successfully.
Dec 13 02:13:54 np0005558317 podman[79443]: 2025-12-13 07:13:54.768344242 +0000 UTC m=+0.076926177 container died 991ecccef7f6c0fac0eee4769d013507b751594afca639e6a78ade442cc3828e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_poitras, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 02:13:54 np0005558317 systemd[1]: var-lib-containers-storage-overlay-3f4c9349fac31bfe111eda279c3f74526e525167c24ee2c1fbff467903d6dcfd-merged.mount: Deactivated successfully.
Dec 13 02:13:54 np0005558317 podman[79443]: 2025-12-13 07:13:54.784198599 +0000 UTC m=+0.092780534 container remove 991ecccef7f6c0fac0eee4769d013507b751594afca639e6a78ade442cc3828e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_poitras, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 02:13:54 np0005558317 podman[79443]: 2025-12-13 07:13:54.707578127 +0000 UTC m=+0.016160072 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:13:54 np0005558317 systemd[1]: libpod-conmon-991ecccef7f6c0fac0eee4769d013507b751594afca639e6a78ade442cc3828e.scope: Deactivated successfully.
Dec 13 02:13:54 np0005558317 systemd[1]: Reloading.
Dec 13 02:13:54 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:13:54 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:13:54 np0005558317 python3[79485]: ansible-ansible.legacy.async_status Invoked with jid=j795524582964.78808 mode=status _async_dir=/root/.ansible_async
Dec 13 02:13:55 np0005558317 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:13:55 np0005558317 systemd[1]: Reloading.
Dec 13 02:13:55 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:13:55 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:13:55 np0005558317 python3[79583]: ansible-ansible.legacy.async_status Invoked with jid=j795524582964.78808 mode=cleanup _async_dir=/root/.ansible_async
Dec 13 02:13:55 np0005558317 systemd[1]: Starting Ceph crash.compute-0 for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de...
Dec 13 02:13:55 np0005558317 ceph-mon[74928]: Updating compute-0:/var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/config/ceph.client.admin.keyring
Dec 13 02:13:55 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:55 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:55 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:55 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 13 02:13:55 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 13 02:13:55 np0005558317 ceph-mon[74928]: Deploying daemon crash.compute-0 on compute-0
Dec 13 02:13:55 np0005558317 podman[79662]: 2025-12-13 07:13:55.407918064 +0000 UTC m=+0.029913200 container create 8e6a4f61ea03b0deb3b22f2359f4e4ace46ba5e323139a1f6359205360a9b0cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-crash-compute-0, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Dec 13 02:13:55 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3897b70dc7cab4fb8705dd286c4e00c11f6f6071eb2e509679c4e4e93e2d82ee/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:55 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3897b70dc7cab4fb8705dd286c4e00c11f6f6071eb2e509679c4e4e93e2d82ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:55 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3897b70dc7cab4fb8705dd286c4e00c11f6f6071eb2e509679c4e4e93e2d82ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:55 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3897b70dc7cab4fb8705dd286c4e00c11f6f6071eb2e509679c4e4e93e2d82ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:55 np0005558317 podman[79662]: 2025-12-13 07:13:55.448717228 +0000 UTC m=+0.070712364 container init 8e6a4f61ea03b0deb3b22f2359f4e4ace46ba5e323139a1f6359205360a9b0cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-crash-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 02:13:55 np0005558317 podman[79662]: 2025-12-13 07:13:55.453975935 +0000 UTC m=+0.075971062 container start 8e6a4f61ea03b0deb3b22f2359f4e4ace46ba5e323139a1f6359205360a9b0cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-crash-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 02:13:55 np0005558317 bash[79662]: 8e6a4f61ea03b0deb3b22f2359f4e4ace46ba5e323139a1f6359205360a9b0cc
Dec 13 02:13:55 np0005558317 podman[79662]: 2025-12-13 07:13:55.396423633 +0000 UTC m=+0.018418759 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:13:55 np0005558317 systemd[1]: Started Ceph crash.compute-0 for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de.
Dec 13 02:13:55 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:13:55 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:55 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-crash-compute-0[79701]: INFO:ceph-crash:pinging cluster to exercise our key
Dec 13 02:13:55 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:13:55 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:55 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 13 02:13:55 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:55 np0005558317 ceph-mgr[75200]: [progress INFO root] complete: finished ev a606794a-c965-49eb-88dd-648db5ecfed2 (Updating crash deployment (+1 -> 1))
Dec 13 02:13:55 np0005558317 ceph-mgr[75200]: [progress INFO root] Completed event a606794a-c965-49eb-88dd-648db5ecfed2 (Updating crash deployment (+1 -> 1)) in 1 seconds
Dec 13 02:13:55 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 13 02:13:55 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:55 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 13 02:13:55 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:55 np0005558317 ceph-mgr[75200]: [progress INFO root] update: starting ev 9fd020bf-18ac-442a-a411-2432b8213490 (Updating mgr deployment (+1 -> 2))
Dec 13 02:13:55 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.ndpimg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 13 02:13:55 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.ndpimg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 13 02:13:55 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ndpimg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 13 02:13:55 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 13 02:13:55 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "mgr services"} : dispatch
Dec 13 02:13:55 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:13:55 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:13:55 np0005558317 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.ndpimg on compute-0
Dec 13 02:13:55 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.ndpimg on compute-0
Dec 13 02:13:55 np0005558317 python3[79698]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 02:13:55 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-crash-compute-0[79701]: 2025-12-13T07:13:55.581+0000 7f1275645640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 13 02:13:55 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-crash-compute-0[79701]: 2025-12-13T07:13:55.581+0000 7f1275645640 -1 AuthRegistry(0x7f1270052d90) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 13 02:13:55 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-crash-compute-0[79701]: 2025-12-13T07:13:55.585+0000 7f1275645640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 13 02:13:55 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-crash-compute-0[79701]: 2025-12-13T07:13:55.585+0000 7f1275645640 -1 AuthRegistry(0x7f1275643fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 13 02:13:55 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-crash-compute-0[79701]: 2025-12-13T07:13:55.586+0000 7f126effd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 13 02:13:55 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-crash-compute-0[79701]: 2025-12-13T07:13:55.587+0000 7f1275645640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec 13 02:13:55 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-crash-compute-0[79701]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec 13 02:13:55 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-crash-compute-0[79701]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec 13 02:13:55 np0005558317 podman[79829]: 2025-12-13 07:13:55.907141197 +0000 UTC m=+0.028940171 container create 671ca014afe2ed695d1fce2d6df1dc620b9f9e0f76a6d2c8789ed912402534a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 02:13:55 np0005558317 systemd[1]: Started libpod-conmon-671ca014afe2ed695d1fce2d6df1dc620b9f9e0f76a6d2c8789ed912402534a3.scope.
Dec 13 02:13:55 np0005558317 python3[79811]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:13:55 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:55 np0005558317 podman[79829]: 2025-12-13 07:13:55.953816629 +0000 UTC m=+0.075615623 container init 671ca014afe2ed695d1fce2d6df1dc620b9f9e0f76a6d2c8789ed912402534a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:13:55 np0005558317 podman[79829]: 2025-12-13 07:13:55.960118247 +0000 UTC m=+0.081917221 container start 671ca014afe2ed695d1fce2d6df1dc620b9f9e0f76a6d2c8789ed912402534a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_knuth, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 02:13:55 np0005558317 podman[79829]: 2025-12-13 07:13:55.96351894 +0000 UTC m=+0.085317914 container attach 671ca014afe2ed695d1fce2d6df1dc620b9f9e0f76a6d2c8789ed912402534a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_knuth, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:13:55 np0005558317 clever_knuth[79842]: 167 167
Dec 13 02:13:55 np0005558317 podman[79829]: 2025-12-13 07:13:55.965176837 +0000 UTC m=+0.086975812 container died 671ca014afe2ed695d1fce2d6df1dc620b9f9e0f76a6d2c8789ed912402534a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_knuth, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 02:13:55 np0005558317 systemd[1]: libpod-671ca014afe2ed695d1fce2d6df1dc620b9f9e0f76a6d2c8789ed912402534a3.scope: Deactivated successfully.
Dec 13 02:13:55 np0005558317 systemd[1]: var-lib-containers-storage-overlay-3f7151c6361dc9d5636e3688b90b3ba75de0aa09b025f71d95c88ac41bb4ba95-merged.mount: Deactivated successfully.
Dec 13 02:13:55 np0005558317 podman[79844]: 2025-12-13 07:13:55.980339313 +0000 UTC m=+0.037627704 container create 8f61abaa953fa280a84dd573546b218c0a249cddf5c186aff4a4bb28d3bdb5d4 (image=quay.io/ceph/ceph:v20, name=vibrant_solomon, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Dec 13 02:13:55 np0005558317 podman[79829]: 2025-12-13 07:13:55.982509003 +0000 UTC m=+0.104307977 container remove 671ca014afe2ed695d1fce2d6df1dc620b9f9e0f76a6d2c8789ed912402534a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:13:55 np0005558317 podman[79829]: 2025-12-13 07:13:55.896084379 +0000 UTC m=+0.017883363 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:13:55 np0005558317 systemd[1]: libpod-conmon-671ca014afe2ed695d1fce2d6df1dc620b9f9e0f76a6d2c8789ed912402534a3.scope: Deactivated successfully.
Dec 13 02:13:56 np0005558317 systemd[1]: Started libpod-conmon-8f61abaa953fa280a84dd573546b218c0a249cddf5c186aff4a4bb28d3bdb5d4.scope.
Dec 13 02:13:56 np0005558317 systemd[1]: Reloading.
Dec 13 02:13:56 np0005558317 podman[79844]: 2025-12-13 07:13:55.962537616 +0000 UTC m=+0.019826016 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:56 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:13:56 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:13:56 np0005558317 ceph-mgr[75200]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 02:13:56 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:56 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbe0c9a9eb4baa3442a907d4c209ad3a4977903a2a0ba4a425e2f6a48f3e6455/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:56 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbe0c9a9eb4baa3442a907d4c209ad3a4977903a2a0ba4a425e2f6a48f3e6455/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:56 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbe0c9a9eb4baa3442a907d4c209ad3a4977903a2a0ba4a425e2f6a48f3e6455/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:56 np0005558317 podman[79844]: 2025-12-13 07:13:56.228492408 +0000 UTC m=+0.285780788 container init 8f61abaa953fa280a84dd573546b218c0a249cddf5c186aff4a4bb28d3bdb5d4 (image=quay.io/ceph/ceph:v20, name=vibrant_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Dec 13 02:13:56 np0005558317 podman[79844]: 2025-12-13 07:13:56.234896448 +0000 UTC m=+0.292184848 container start 8f61abaa953fa280a84dd573546b218c0a249cddf5c186aff4a4bb28d3bdb5d4 (image=quay.io/ceph/ceph:v20, name=vibrant_solomon, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 02:13:56 np0005558317 podman[79844]: 2025-12-13 07:13:56.235929921 +0000 UTC m=+0.293218311 container attach 8f61abaa953fa280a84dd573546b218c0a249cddf5c186aff4a4bb28d3bdb5d4 (image=quay.io/ceph/ceph:v20, name=vibrant_solomon, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:13:56 np0005558317 systemd[1]: Reloading.
Dec 13 02:13:56 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:13:56 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:13:56 np0005558317 systemd[1]: Starting Ceph mgr.compute-0.ndpimg for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de...
Dec 13 02:13:56 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:56 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:56 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:56 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:56 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:56 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.ndpimg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 13 02:13:56 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ndpimg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 13 02:13:56 np0005558317 ceph-mon[74928]: Deploying daemon mgr.compute-0.ndpimg on compute-0
Dec 13 02:13:56 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 02:13:56 np0005558317 vibrant_solomon[79870]: 
Dec 13 02:13:56 np0005558317 vibrant_solomon[79870]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 13 02:13:56 np0005558317 systemd[1]: libpod-8f61abaa953fa280a84dd573546b218c0a249cddf5c186aff4a4bb28d3bdb5d4.scope: Deactivated successfully.
Dec 13 02:13:56 np0005558317 podman[79844]: 2025-12-13 07:13:56.582578594 +0000 UTC m=+0.639866975 container died 8f61abaa953fa280a84dd573546b218c0a249cddf5c186aff4a4bb28d3bdb5d4 (image=quay.io/ceph/ceph:v20, name=vibrant_solomon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True)
Dec 13 02:13:56 np0005558317 systemd[1]: var-lib-containers-storage-overlay-bbe0c9a9eb4baa3442a907d4c209ad3a4977903a2a0ba4a425e2f6a48f3e6455-merged.mount: Deactivated successfully.
Dec 13 02:13:56 np0005558317 podman[79844]: 2025-12-13 07:13:56.613863462 +0000 UTC m=+0.671151852 container remove 8f61abaa953fa280a84dd573546b218c0a249cddf5c186aff4a4bb28d3bdb5d4 (image=quay.io/ceph/ceph:v20, name=vibrant_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 02:13:56 np0005558317 systemd[1]: libpod-conmon-8f61abaa953fa280a84dd573546b218c0a249cddf5c186aff4a4bb28d3bdb5d4.scope: Deactivated successfully.
Dec 13 02:13:56 np0005558317 podman[80008]: 2025-12-13 07:13:56.646007224 +0000 UTC m=+0.052293325 container create 0ff35a5463d3bbc7d852b4c61818e209fc4953caeb52224a5f4e7570eb1a0d88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-ndpimg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 02:13:56 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9801f212c1a44328b5743c56063ebc515c709dd2f1586ea86202bedf61a16de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:56 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9801f212c1a44328b5743c56063ebc515c709dd2f1586ea86202bedf61a16de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:56 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9801f212c1a44328b5743c56063ebc515c709dd2f1586ea86202bedf61a16de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:56 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9801f212c1a44328b5743c56063ebc515c709dd2f1586ea86202bedf61a16de/merged/var/lib/ceph/mgr/ceph-compute-0.ndpimg supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:56 np0005558317 podman[80008]: 2025-12-13 07:13:56.684778637 +0000 UTC m=+0.091064758 container init 0ff35a5463d3bbc7d852b4c61818e209fc4953caeb52224a5f4e7570eb1a0d88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-ndpimg, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 02:13:56 np0005558317 podman[80008]: 2025-12-13 07:13:56.689406568 +0000 UTC m=+0.095692669 container start 0ff35a5463d3bbc7d852b4c61818e209fc4953caeb52224a5f4e7570eb1a0d88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-ndpimg, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 02:13:56 np0005558317 bash[80008]: 0ff35a5463d3bbc7d852b4c61818e209fc4953caeb52224a5f4e7570eb1a0d88
Dec 13 02:13:56 np0005558317 podman[80008]: 2025-12-13 07:13:56.633533362 +0000 UTC m=+0.039819483 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:13:56 np0005558317 systemd[1]: Started Ceph mgr.compute-0.ndpimg for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de.
Dec 13 02:13:56 np0005558317 ceph-mgr[80033]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 02:13:56 np0005558317 ceph-mgr[80033]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Dec 13 02:13:56 np0005558317 ceph-mgr[80033]: pidfile_write: ignore empty --pid-file
Dec 13 02:13:56 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:13:56 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:56 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:13:56 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:56 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 13 02:13:56 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:56 np0005558317 ceph-mgr[75200]: [progress INFO root] complete: finished ev 9fd020bf-18ac-442a-a411-2432b8213490 (Updating mgr deployment (+1 -> 2))
Dec 13 02:13:56 np0005558317 ceph-mgr[75200]: [progress INFO root] Completed event 9fd020bf-18ac-442a-a411-2432b8213490 (Updating mgr deployment (+1 -> 2)) in 1 seconds
Dec 13 02:13:56 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 13 02:13:56 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:56 np0005558317 ceph-mgr[80033]: mgr[py] Loading python module 'alerts'
Dec 13 02:13:56 np0005558317 ceph-mgr[80033]: mgr[py] Loading python module 'balancer'
Dec 13 02:13:56 np0005558317 ceph-mgr[80033]: mgr[py] Loading python module 'cephadm'
Dec 13 02:13:56 np0005558317 python3[80153]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:13:57 np0005558317 podman[80155]: 2025-12-13 07:13:57.030314618 +0000 UTC m=+0.036193237 container create abb29f84e0e380833c1552e9ce55abe5d64c5d30cf4018d326ccb8a38616b1b2 (image=quay.io/ceph/ceph:v20, name=epic_leavitt, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 02:13:57 np0005558317 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:13:57 np0005558317 systemd[1]: Started libpod-conmon-abb29f84e0e380833c1552e9ce55abe5d64c5d30cf4018d326ccb8a38616b1b2.scope.
Dec 13 02:13:57 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:57 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e319d0654aa2e360cdbaea7a828ed62213e5c8c9cb6e633ff2cd10d7be535a7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:57 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e319d0654aa2e360cdbaea7a828ed62213e5c8c9cb6e633ff2cd10d7be535a7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:57 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e319d0654aa2e360cdbaea7a828ed62213e5c8c9cb6e633ff2cd10d7be535a7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:57 np0005558317 podman[80155]: 2025-12-13 07:13:57.080055675 +0000 UTC m=+0.085934304 container init abb29f84e0e380833c1552e9ce55abe5d64c5d30cf4018d326ccb8a38616b1b2 (image=quay.io/ceph/ceph:v20, name=epic_leavitt, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 02:13:57 np0005558317 podman[80155]: 2025-12-13 07:13:57.086512674 +0000 UTC m=+0.092391293 container start abb29f84e0e380833c1552e9ce55abe5d64c5d30cf4018d326ccb8a38616b1b2 (image=quay.io/ceph/ceph:v20, name=epic_leavitt, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:13:57 np0005558317 podman[80155]: 2025-12-13 07:13:57.087850179 +0000 UTC m=+0.093728798 container attach abb29f84e0e380833c1552e9ce55abe5d64c5d30cf4018d326ccb8a38616b1b2 (image=quay.io/ceph/ceph:v20, name=epic_leavitt, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 02:13:57 np0005558317 podman[80155]: 2025-12-13 07:13:57.018553566 +0000 UTC m=+0.024432205 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:57 np0005558317 podman[80215]: 2025-12-13 07:13:57.22710685 +0000 UTC m=+0.043413271 container exec 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:13:57 np0005558317 podman[80215]: 2025-12-13 07:13:57.30116235 +0000 UTC m=+0.117468771 container exec_died 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1962733887' entity='client.admin' 
Dec 13 02:13:57 np0005558317 systemd[1]: libpod-abb29f84e0e380833c1552e9ce55abe5d64c5d30cf4018d326ccb8a38616b1b2.scope: Deactivated successfully.
Dec 13 02:13:57 np0005558317 podman[80155]: 2025-12-13 07:13:57.438272617 +0000 UTC m=+0.444151226 container died abb29f84e0e380833c1552e9ce55abe5d64c5d30cf4018d326ccb8a38616b1b2 (image=quay.io/ceph/ceph:v20, name=epic_leavitt, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 02:13:57 np0005558317 systemd[1]: var-lib-containers-storage-overlay-3e319d0654aa2e360cdbaea7a828ed62213e5c8c9cb6e633ff2cd10d7be535a7-merged.mount: Deactivated successfully.
Dec 13 02:13:57 np0005558317 podman[80155]: 2025-12-13 07:13:57.472074997 +0000 UTC m=+0.477953616 container remove abb29f84e0e380833c1552e9ce55abe5d64c5d30cf4018d326ccb8a38616b1b2 (image=quay.io/ceph/ceph:v20, name=epic_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:13:57 np0005558317 systemd[1]: libpod-conmon-abb29f84e0e380833c1552e9ce55abe5d64c5d30cf4018d326ccb8a38616b1b2.scope: Deactivated successfully.
Dec 13 02:13:57 np0005558317 ceph-mgr[80033]: mgr[py] Loading python module 'crash'
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:57 np0005558317 ceph-mgr[80033]: mgr[py] Loading python module 'dashboard'
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:13:57 np0005558317 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Dec 13 02:13:57 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:13:57 np0005558317 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec 13 02:13:57 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec 13 02:13:57 np0005558317 python3[80364]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/1962733887' entity='client.admin' 
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:57 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 13 02:13:57 np0005558317 podman[80414]: 2025-12-13 07:13:57.790561032 +0000 UTC m=+0.041704488 container create 2d87ef2841aa716899a3c197a635034c745a3e6db883a0d7aee2ca9894f8d6d8 (image=quay.io/ceph/ceph:v20, name=hardcore_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 02:13:57 np0005558317 systemd[1]: Started libpod-conmon-2d87ef2841aa716899a3c197a635034c745a3e6db883a0d7aee2ca9894f8d6d8.scope.
Dec 13 02:13:57 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:57 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f46e7c661ec1f0eee69d3a7deda7130356902892e720de21f8663b91e970dd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:57 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f46e7c661ec1f0eee69d3a7deda7130356902892e720de21f8663b91e970dd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:57 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f46e7c661ec1f0eee69d3a7deda7130356902892e720de21f8663b91e970dd/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:57 np0005558317 podman[80414]: 2025-12-13 07:13:57.840748398 +0000 UTC m=+0.091891863 container init 2d87ef2841aa716899a3c197a635034c745a3e6db883a0d7aee2ca9894f8d6d8 (image=quay.io/ceph/ceph:v20, name=hardcore_hopper, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 02:13:57 np0005558317 podman[80414]: 2025-12-13 07:13:57.845216338 +0000 UTC m=+0.096359793 container start 2d87ef2841aa716899a3c197a635034c745a3e6db883a0d7aee2ca9894f8d6d8 (image=quay.io/ceph/ceph:v20, name=hardcore_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:13:57 np0005558317 podman[80414]: 2025-12-13 07:13:57.846303241 +0000 UTC m=+0.097446686 container attach 2d87ef2841aa716899a3c197a635034c745a3e6db883a0d7aee2ca9894f8d6d8 (image=quay.io/ceph/ceph:v20, name=hardcore_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 02:13:57 np0005558317 podman[80414]: 2025-12-13 07:13:57.778066552 +0000 UTC m=+0.029210007 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:58 np0005558317 podman[80489]: 2025-12-13 07:13:58.049932758 +0000 UTC m=+0.027246516 container create df8c0ae5491626a85a2b282c2dbdc7a0a22b0db3461505cf7d6733767d26d40a (image=quay.io/ceph/ceph:v20, name=hardcore_hermann, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Dec 13 02:13:58 np0005558317 systemd[1]: Started libpod-conmon-df8c0ae5491626a85a2b282c2dbdc7a0a22b0db3461505cf7d6733767d26d40a.scope.
Dec 13 02:13:58 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:58 np0005558317 podman[80489]: 2025-12-13 07:13:58.088046855 +0000 UTC m=+0.065360622 container init df8c0ae5491626a85a2b282c2dbdc7a0a22b0db3461505cf7d6733767d26d40a (image=quay.io/ceph/ceph:v20, name=hardcore_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 02:13:58 np0005558317 podman[80489]: 2025-12-13 07:13:58.091896804 +0000 UTC m=+0.069210561 container start df8c0ae5491626a85a2b282c2dbdc7a0a22b0db3461505cf7d6733767d26d40a (image=quay.io/ceph/ceph:v20, name=hardcore_hermann, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:13:58 np0005558317 podman[80489]: 2025-12-13 07:13:58.093515938 +0000 UTC m=+0.070829695 container attach df8c0ae5491626a85a2b282c2dbdc7a0a22b0db3461505cf7d6733767d26d40a (image=quay.io/ceph/ceph:v20, name=hardcore_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 02:13:58 np0005558317 hardcore_hermann[80501]: 167 167
Dec 13 02:13:58 np0005558317 systemd[1]: libpod-df8c0ae5491626a85a2b282c2dbdc7a0a22b0db3461505cf7d6733767d26d40a.scope: Deactivated successfully.
Dec 13 02:13:58 np0005558317 podman[80506]: 2025-12-13 07:13:58.125333066 +0000 UTC m=+0.021583760 container died df8c0ae5491626a85a2b282c2dbdc7a0a22b0db3461505cf7d6733767d26d40a (image=quay.io/ceph/ceph:v20, name=hardcore_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 02:13:58 np0005558317 systemd[1]: var-lib-containers-storage-overlay-9d71c05acfa5e92d7e056f3576202eb85611d65538b171ecaa093dd2658c6bd4-merged.mount: Deactivated successfully.
Dec 13 02:13:58 np0005558317 podman[80489]: 2025-12-13 07:13:58.038815907 +0000 UTC m=+0.016129685 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:58 np0005558317 podman[80506]: 2025-12-13 07:13:58.143599439 +0000 UTC m=+0.039850122 container remove df8c0ae5491626a85a2b282c2dbdc7a0a22b0db3461505cf7d6733767d26d40a (image=quay.io/ceph/ceph:v20, name=hardcore_hermann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:13:58 np0005558317 systemd[1]: libpod-conmon-df8c0ae5491626a85a2b282c2dbdc7a0a22b0db3461505cf7d6733767d26d40a.scope: Deactivated successfully.
Dec 13 02:13:58 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Dec 13 02:13:58 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1381470394' entity='client.admin' 
Dec 13 02:13:58 np0005558317 systemd[1]: libpod-2d87ef2841aa716899a3c197a635034c745a3e6db883a0d7aee2ca9894f8d6d8.scope: Deactivated successfully.
Dec 13 02:13:58 np0005558317 podman[80414]: 2025-12-13 07:13:58.17662604 +0000 UTC m=+0.427769485 container died 2d87ef2841aa716899a3c197a635034c745a3e6db883a0d7aee2ca9894f8d6d8 (image=quay.io/ceph/ceph:v20, name=hardcore_hopper, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 02:13:58 np0005558317 ceph-mgr[75200]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Dec 13 02:13:58 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:13:58 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:13:58 np0005558317 ceph-mon[74928]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 13 02:13:58 np0005558317 systemd[1]: var-lib-containers-storage-overlay-98f46e7c661ec1f0eee69d3a7deda7130356902892e720de21f8663b91e970dd-merged.mount: Deactivated successfully.
Dec 13 02:13:58 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:58 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:13:58 np0005558317 podman[80414]: 2025-12-13 07:13:58.199113248 +0000 UTC m=+0.450256692 container remove 2d87ef2841aa716899a3c197a635034c745a3e6db883a0d7aee2ca9894f8d6d8 (image=quay.io/ceph/ceph:v20, name=hardcore_hopper, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 02:13:58 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:58 np0005558317 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.qsherl (unknown last config time)...
Dec 13 02:13:58 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.qsherl (unknown last config time)...
Dec 13 02:13:58 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.qsherl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 13 02:13:58 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.qsherl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 13 02:13:58 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 13 02:13:58 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "mgr services"} : dispatch
Dec 13 02:13:58 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:13:58 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:13:58 np0005558317 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.qsherl on compute-0
Dec 13 02:13:58 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.qsherl on compute-0
Dec 13 02:13:58 np0005558317 systemd[1]: libpod-conmon-2d87ef2841aa716899a3c197a635034c745a3e6db883a0d7aee2ca9894f8d6d8.scope: Deactivated successfully.
Dec 13 02:13:58 np0005558317 ceph-mgr[80033]: mgr[py] Loading python module 'devicehealth'
Dec 13 02:13:58 np0005558317 ceph-mgr[80033]: mgr[py] Loading python module 'diskprediction_local'
Dec 13 02:13:58 np0005558317 python3[80604]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:13:58 np0005558317 podman[80616]: 2025-12-13 07:13:58.509722113 +0000 UTC m=+0.032863075 container create 5cda0c1ae7f6e2f94c98ce915c84cc83adf22f92de6102ffc9d7e2fef9dad28a (image=quay.io/ceph/ceph:v20, name=recursing_pike, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 02:13:58 np0005558317 systemd[1]: Started libpod-conmon-5cda0c1ae7f6e2f94c98ce915c84cc83adf22f92de6102ffc9d7e2fef9dad28a.scope.
Dec 13 02:13:58 np0005558317 ansible-async_wrapper.py[78879]: Done in kid B.
Dec 13 02:13:58 np0005558317 podman[80629]: 2025-12-13 07:13:58.544843845 +0000 UTC m=+0.042786754 container create f4c136b0029af81a818bd52422fe1d68fb06ffe5310d4d3a3cbaa514dd7fb225 (image=quay.io/ceph/ceph:v20, name=silly_feistel, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:13:58 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:58 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0a3906650d1ec75864d39c9b1cef424ec1fd747013068aab9f8adf08e807e7b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:58 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0a3906650d1ec75864d39c9b1cef424ec1fd747013068aab9f8adf08e807e7b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:58 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0a3906650d1ec75864d39c9b1cef424ec1fd747013068aab9f8adf08e807e7b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 02:13:58 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-ndpimg[80029]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 13 02:13:58 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-ndpimg[80029]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 13 02:13:58 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-ndpimg[80029]:  from numpy import show_config as show_numpy_config
Dec 13 02:13:58 np0005558317 ceph-mgr[80033]: mgr[py] Loading python module 'influx'
Dec 13 02:13:58 np0005558317 podman[80616]: 2025-12-13 07:13:58.561535566 +0000 UTC m=+0.084676548 container init 5cda0c1ae7f6e2f94c98ce915c84cc83adf22f92de6102ffc9d7e2fef9dad28a (image=quay.io/ceph/ceph:v20, name=recursing_pike, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:13:58 np0005558317 podman[80616]: 2025-12-13 07:13:58.566347403 +0000 UTC m=+0.089488365 container start 5cda0c1ae7f6e2f94c98ce915c84cc83adf22f92de6102ffc9d7e2fef9dad28a (image=quay.io/ceph/ceph:v20, name=recursing_pike, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 02:13:58 np0005558317 podman[80616]: 2025-12-13 07:13:58.567380564 +0000 UTC m=+0.090521527 container attach 5cda0c1ae7f6e2f94c98ce915c84cc83adf22f92de6102ffc9d7e2fef9dad28a (image=quay.io/ceph/ceph:v20, name=recursing_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 02:13:58 np0005558317 podman[80616]: 2025-12-13 07:13:58.497307663 +0000 UTC m=+0.020448646 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:58 np0005558317 systemd[1]: Started libpod-conmon-f4c136b0029af81a818bd52422fe1d68fb06ffe5310d4d3a3cbaa514dd7fb225.scope.
Dec 13 02:13:58 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:13:58 np0005558317 podman[80629]: 2025-12-13 07:13:58.620796991 +0000 UTC m=+0.118739911 container init f4c136b0029af81a818bd52422fe1d68fb06ffe5310d4d3a3cbaa514dd7fb225 (image=quay.io/ceph/ceph:v20, name=silly_feistel, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:13:58 np0005558317 ceph-mgr[80033]: mgr[py] Loading python module 'insights'
Dec 13 02:13:58 np0005558317 podman[80629]: 2025-12-13 07:13:58.625063523 +0000 UTC m=+0.123006422 container start f4c136b0029af81a818bd52422fe1d68fb06ffe5310d4d3a3cbaa514dd7fb225 (image=quay.io/ceph/ceph:v20, name=silly_feistel, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:13:58 np0005558317 podman[80629]: 2025-12-13 07:13:58.529517641 +0000 UTC m=+0.027460550 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:13:58 np0005558317 silly_feistel[80649]: 167 167
Dec 13 02:13:58 np0005558317 systemd[1]: libpod-f4c136b0029af81a818bd52422fe1d68fb06ffe5310d4d3a3cbaa514dd7fb225.scope: Deactivated successfully.
Dec 13 02:13:58 np0005558317 conmon[80649]: conmon f4c136b0029af81a818b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f4c136b0029af81a818bd52422fe1d68fb06ffe5310d4d3a3cbaa514dd7fb225.scope/container/memory.events
Dec 13 02:13:58 np0005558317 podman[80629]: 2025-12-13 07:13:58.628884216 +0000 UTC m=+0.126827115 container attach f4c136b0029af81a818bd52422fe1d68fb06ffe5310d4d3a3cbaa514dd7fb225 (image=quay.io/ceph/ceph:v20, name=silly_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:13:58 np0005558317 podman[80629]: 2025-12-13 07:13:58.629303584 +0000 UTC m=+0.127246483 container died f4c136b0029af81a818bd52422fe1d68fb06ffe5310d4d3a3cbaa514dd7fb225 (image=quay.io/ceph/ceph:v20, name=silly_feistel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:13:58 np0005558317 podman[80629]: 2025-12-13 07:13:58.648343832 +0000 UTC m=+0.146286730 container remove f4c136b0029af81a818bd52422fe1d68fb06ffe5310d4d3a3cbaa514dd7fb225 (image=quay.io/ceph/ceph:v20, name=silly_feistel, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 02:13:58 np0005558317 systemd[1]: libpod-conmon-f4c136b0029af81a818bd52422fe1d68fb06ffe5310d4d3a3cbaa514dd7fb225.scope: Deactivated successfully.
Dec 13 02:13:58 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:13:58 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:58 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:13:58 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:58 np0005558317 ceph-mgr[80033]: mgr[py] Loading python module 'iostat'
Dec 13 02:13:58 np0005558317 ceph-mgr[80033]: mgr[py] Loading python module 'k8sevents'
Dec 13 02:13:58 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Dec 13 02:13:58 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3278627661' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Dec 13 02:13:59 np0005558317 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:13:59 np0005558317 ceph-mgr[75200]: [progress INFO root] Writing back 2 completed events
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:59 np0005558317 systemd[1]: var-lib-containers-storage-overlay-9cb4f343d2aab6bcf7e76a7b26923f3959e1325d18da72b9952f6b0ca12c0650-merged.mount: Deactivated successfully.
Dec 13 02:13:59 np0005558317 podman[80769]: 2025-12-13 07:13:59.08555115 +0000 UTC m=+0.038396778 container exec 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:13:59 np0005558317 ceph-mgr[80033]: mgr[py] Loading python module 'localpool'
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: Reconfiguring mon.compute-0 (unknown last config time)...
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: Reconfiguring daemon mon.compute-0 on compute-0
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/1381470394' entity='client.admin' 
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: Reconfiguring mgr.compute-0.qsherl (unknown last config time)...
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.qsherl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: Reconfiguring daemon mgr.compute-0.qsherl on compute-0
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/3278627661' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:59 np0005558317 podman[80769]: 2025-12-13 07:13:59.161722979 +0000 UTC m=+0.114568586 container exec_died 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:13:59 np0005558317 ceph-mgr[80033]: mgr[py] Loading python module 'mds_autoscaler'
Dec 13 02:13:59 np0005558317 ceph-mgr[80033]: mgr[py] Loading python module 'mirroring'
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:13:59 np0005558317 ceph-mgr[80033]: mgr[py] Loading python module 'nfs'
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3278627661' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Dec 13 02:13:59 np0005558317 recursing_pike[80643]: set require_min_compat_client to mimic
Dec 13 02:13:59 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Dec 13 02:13:59 np0005558317 systemd[1]: libpod-5cda0c1ae7f6e2f94c98ce915c84cc83adf22f92de6102ffc9d7e2fef9dad28a.scope: Deactivated successfully.
Dec 13 02:13:59 np0005558317 podman[80616]: 2025-12-13 07:13:59.708776426 +0000 UTC m=+1.231917388 container died 5cda0c1ae7f6e2f94c98ce915c84cc83adf22f92de6102ffc9d7e2fef9dad28a (image=quay.io/ceph/ceph:v20, name=recursing_pike, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 02:13:59 np0005558317 ceph-mgr[80033]: mgr[py] Loading python module 'orchestrator'
Dec 13 02:13:59 np0005558317 systemd[1]: var-lib-containers-storage-overlay-a0a3906650d1ec75864d39c9b1cef424ec1fd747013068aab9f8adf08e807e7b-merged.mount: Deactivated successfully.
Dec 13 02:13:59 np0005558317 podman[80616]: 2025-12-13 07:13:59.735932211 +0000 UTC m=+1.259073173 container remove 5cda0c1ae7f6e2f94c98ce915c84cc83adf22f92de6102ffc9d7e2fef9dad28a (image=quay.io/ceph/ceph:v20, name=recursing_pike, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:13:59 np0005558317 systemd[1]: libpod-conmon-5cda0c1ae7f6e2f94c98ce915c84cc83adf22f92de6102ffc9d7e2fef9dad28a.scope: Deactivated successfully.
Dec 13 02:13:59 np0005558317 ceph-mgr[80033]: mgr[py] Loading python module 'osd_perf_query'
Dec 13 02:13:59 np0005558317 ceph-mgr[80033]: mgr[py] Loading python module 'osd_support'
Dec 13 02:14:00 np0005558317 ceph-mgr[80033]: mgr[py] Loading python module 'pg_autoscaler'
Dec 13 02:14:00 np0005558317 ceph-mgr[80033]: mgr[py] Loading python module 'progress'
Dec 13 02:14:00 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:14:00 np0005558317 python3[80921]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:14:00 np0005558317 ceph-mgr[80033]: mgr[py] Loading python module 'prometheus'
Dec 13 02:14:00 np0005558317 podman[80922]: 2025-12-13 07:14:00.230689076 +0000 UTC m=+0.024612474 container create 12f15b80f3a73e6a8d1bf26a164fe36710a94286224134ca044a455385aa6a7d (image=quay.io/ceph/ceph:v20, name=silly_matsumoto, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:00 np0005558317 systemd[1]: Started libpod-conmon-12f15b80f3a73e6a8d1bf26a164fe36710a94286224134ca044a455385aa6a7d.scope.
Dec 13 02:14:00 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:00 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9ddf781c96ad350072930a964ecf3ca0f99756469cf22794dc35497e877548/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:00 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9ddf781c96ad350072930a964ecf3ca0f99756469cf22794dc35497e877548/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:00 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9ddf781c96ad350072930a964ecf3ca0f99756469cf22794dc35497e877548/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:00 np0005558317 podman[80922]: 2025-12-13 07:14:00.289273359 +0000 UTC m=+0.083196756 container init 12f15b80f3a73e6a8d1bf26a164fe36710a94286224134ca044a455385aa6a7d (image=quay.io/ceph/ceph:v20, name=silly_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 02:14:00 np0005558317 podman[80922]: 2025-12-13 07:14:00.293486901 +0000 UTC m=+0.087410298 container start 12f15b80f3a73e6a8d1bf26a164fe36710a94286224134ca044a455385aa6a7d (image=quay.io/ceph/ceph:v20, name=silly_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:00 np0005558317 podman[80922]: 2025-12-13 07:14:00.294569796 +0000 UTC m=+0.088493194 container attach 12f15b80f3a73e6a8d1bf26a164fe36710a94286224134ca044a455385aa6a7d (image=quay.io/ceph/ceph:v20, name=silly_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:00 np0005558317 podman[80922]: 2025-12-13 07:14:00.220821144 +0000 UTC m=+0.014744561 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:14:00 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:00 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:00 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:14:00 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:00 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/3278627661' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 13 02:14:00 np0005558317 ceph-mgr[80033]: mgr[py] Loading python module 'rbd_support'
Dec 13 02:14:00 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:14:00 np0005558317 ceph-mgr[80033]: mgr[py] Loading python module 'rgw'
Dec 13 02:14:00 np0005558317 ceph-mgr[80033]: mgr[py] Loading python module 'rook'
Dec 13 02:14:00 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 13 02:14:00 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:00 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 13 02:14:00 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:00 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 13 02:14:00 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:00 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 13 02:14:00 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:00 np0005558317 ceph-mgr[75200]: [cephadm INFO root] Added host compute-0
Dec 13 02:14:00 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 13 02:14:00 np0005558317 ceph-mgr[75200]: [cephadm INFO root] Saving service mon spec with placement compute-0
Dec 13 02:14:00 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Dec 13 02:14:00 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:14:00 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:14:00 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 13 02:14:00 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:14:00 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:14:00 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:00 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:14:00 np0005558317 ceph-mgr[75200]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Dec 13 02:14:00 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Dec 13 02:14:00 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 13 02:14:00 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:00 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:00 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 13 02:14:00 np0005558317 ceph-mgr[75200]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Dec 13 02:14:00 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Dec 13 02:14:00 np0005558317 ceph-mgr[75200]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Dec 13 02:14:00 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Dec 13 02:14:00 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Dec 13 02:14:00 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:00 np0005558317 ceph-mgr[75200]: [progress INFO root] update: starting ev 54b3d2bd-10ea-49e0-a55f-d5b0fd1d6086 (Updating mgr deployment (-1 -> 1))
Dec 13 02:14:00 np0005558317 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.ndpimg from compute-0 -- ports [8765]
Dec 13 02:14:00 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.ndpimg from compute-0 -- ports [8765]
Dec 13 02:14:00 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:00 np0005558317 silly_matsumoto[80934]: Added host 'compute-0' with addr '192.168.122.100'
Dec 13 02:14:00 np0005558317 silly_matsumoto[80934]: Scheduled mon update...
Dec 13 02:14:00 np0005558317 silly_matsumoto[80934]: Scheduled mgr update...
Dec 13 02:14:00 np0005558317 silly_matsumoto[80934]: Scheduled osd.default_drive_group update...
Dec 13 02:14:00 np0005558317 systemd[1]: libpod-12f15b80f3a73e6a8d1bf26a164fe36710a94286224134ca044a455385aa6a7d.scope: Deactivated successfully.
Dec 13 02:14:00 np0005558317 podman[80922]: 2025-12-13 07:14:00.972764699 +0000 UTC m=+0.766688095 container died 12f15b80f3a73e6a8d1bf26a164fe36710a94286224134ca044a455385aa6a7d (image=quay.io/ceph/ceph:v20, name=silly_matsumoto, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 02:14:00 np0005558317 systemd[1]: var-lib-containers-storage-overlay-1c9ddf781c96ad350072930a964ecf3ca0f99756469cf22794dc35497e877548-merged.mount: Deactivated successfully.
Dec 13 02:14:00 np0005558317 podman[80922]: 2025-12-13 07:14:00.999326507 +0000 UTC m=+0.793249904 container remove 12f15b80f3a73e6a8d1bf26a164fe36710a94286224134ca044a455385aa6a7d (image=quay.io/ceph/ceph:v20, name=silly_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:01 np0005558317 systemd[1]: libpod-conmon-12f15b80f3a73e6a8d1bf26a164fe36710a94286224134ca044a455385aa6a7d.scope: Deactivated successfully.
Dec 13 02:14:01 np0005558317 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:14:01 np0005558317 systemd[1]: Stopping Ceph mgr.compute-0.ndpimg for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de...
Dec 13 02:14:01 np0005558317 python3[81113]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:14:01 np0005558317 ceph-mgr[80033]: mgr[py] Loading python module 'selftest'
Dec 13 02:14:01 np0005558317 podman[81139]: 2025-12-13 07:14:01.36161339 +0000 UTC m=+0.039154274 container create 77939f5abbbbc27181e2af2f69b7a74bf65c28931e3a8bde325718e81b0eec80 (image=quay.io/ceph/ceph:v20, name=hopeful_raman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:01 np0005558317 systemd[1]: Started libpod-conmon-77939f5abbbbc27181e2af2f69b7a74bf65c28931e3a8bde325718e81b0eec80.scope.
Dec 13 02:14:01 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:01 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a5f9e57a29c3839a3d559ac9a164750511dd64b550251b50d71275cb58526f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:01 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a5f9e57a29c3839a3d559ac9a164750511dd64b550251b50d71275cb58526f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:01 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a5f9e57a29c3839a3d559ac9a164750511dd64b550251b50d71275cb58526f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:01 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:01 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:01 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:01 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:01 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:14:01 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:01 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:01 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:01 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:01 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:01 np0005558317 podman[81139]: 2025-12-13 07:14:01.423713893 +0000 UTC m=+0.101254796 container init 77939f5abbbbc27181e2af2f69b7a74bf65c28931e3a8bde325718e81b0eec80 (image=quay.io/ceph/ceph:v20, name=hopeful_raman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True)
Dec 13 02:14:01 np0005558317 podman[81156]: 2025-12-13 07:14:01.423739711 +0000 UTC m=+0.073527427 container died 0ff35a5463d3bbc7d852b4c61818e209fc4953caeb52224a5f4e7570eb1a0d88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-ndpimg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:01 np0005558317 podman[81139]: 2025-12-13 07:14:01.430513166 +0000 UTC m=+0.108054059 container start 77939f5abbbbc27181e2af2f69b7a74bf65c28931e3a8bde325718e81b0eec80 (image=quay.io/ceph/ceph:v20, name=hopeful_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 02:14:01 np0005558317 podman[81139]: 2025-12-13 07:14:01.431725466 +0000 UTC m=+0.109266349 container attach 77939f5abbbbc27181e2af2f69b7a74bf65c28931e3a8bde325718e81b0eec80 (image=quay.io/ceph/ceph:v20, name=hopeful_raman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 02:14:01 np0005558317 systemd[1]: var-lib-containers-storage-overlay-c9801f212c1a44328b5743c56063ebc515c709dd2f1586ea86202bedf61a16de-merged.mount: Deactivated successfully.
Dec 13 02:14:01 np0005558317 podman[81139]: 2025-12-13 07:14:01.349822874 +0000 UTC m=+0.027363756 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:14:01 np0005558317 podman[81156]: 2025-12-13 07:14:01.455851174 +0000 UTC m=+0.105638889 container remove 0ff35a5463d3bbc7d852b4c61818e209fc4953caeb52224a5f4e7570eb1a0d88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-ndpimg, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 02:14:01 np0005558317 bash[81156]: ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-ndpimg
Dec 13 02:14:01 np0005558317 systemd[1]: ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de@mgr.compute-0.ndpimg.service: Main process exited, code=exited, status=143/n/a
Dec 13 02:14:01 np0005558317 systemd[1]: ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de@mgr.compute-0.ndpimg.service: Failed with result 'exit-code'.
Dec 13 02:14:01 np0005558317 systemd[1]: Stopped Ceph mgr.compute-0.ndpimg for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de.
Dec 13 02:14:01 np0005558317 systemd[1]: ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de@mgr.compute-0.ndpimg.service: Consumed 5.033s CPU time, 374.6M memory peak, read 0B from disk, written 131.5K to disk.
Dec 13 02:14:01 np0005558317 systemd[1]: Reloading.
Dec 13 02:14:01 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:14:01 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:14:01 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 13 02:14:01 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4180421055' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 13 02:14:01 np0005558317 hopeful_raman[81169]: 
Dec 13 02:14:01 np0005558317 hopeful_raman[81169]: {"fsid":"00fdae1b-7fad-5f1b-8734-ba4d9298a6de","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":39,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-13T07:13:21:319345+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-13T07:13:21.320643+0000","services":{}},"progress_events":{}}
Dec 13 02:14:01 np0005558317 systemd[1]: libpod-77939f5abbbbc27181e2af2f69b7a74bf65c28931e3a8bde325718e81b0eec80.scope: Deactivated successfully.
Dec 13 02:14:01 np0005558317 podman[81139]: 2025-12-13 07:14:01.837166462 +0000 UTC m=+0.514707345 container died 77939f5abbbbc27181e2af2f69b7a74bf65c28931e3a8bde325718e81b0eec80 (image=quay.io/ceph/ceph:v20, name=hopeful_raman, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 02:14:01 np0005558317 ceph-mgr[75200]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.ndpimg
Dec 13 02:14:01 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.ndpimg
Dec 13 02:14:01 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.ndpimg"} v 0)
Dec 13 02:14:01 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.ndpimg"} : dispatch
Dec 13 02:14:01 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.ndpimg"}]': finished
Dec 13 02:14:01 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 13 02:14:01 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:01 np0005558317 ceph-mgr[75200]: [progress INFO root] complete: finished ev 54b3d2bd-10ea-49e0-a55f-d5b0fd1d6086 (Updating mgr deployment (-1 -> 1))
Dec 13 02:14:01 np0005558317 ceph-mgr[75200]: [progress INFO root] Completed event 54b3d2bd-10ea-49e0-a55f-d5b0fd1d6086 (Updating mgr deployment (-1 -> 1)) in 1 seconds
Dec 13 02:14:01 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 13 02:14:01 np0005558317 systemd[1]: var-lib-containers-storage-overlay-e2a5f9e57a29c3839a3d559ac9a164750511dd64b550251b50d71275cb58526f-merged.mount: Deactivated successfully.
Dec 13 02:14:01 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:01 np0005558317 podman[81139]: 2025-12-13 07:14:01.860908589 +0000 UTC m=+0.538449471 container remove 77939f5abbbbc27181e2af2f69b7a74bf65c28931e3a8bde325718e81b0eec80 (image=quay.io/ceph/ceph:v20, name=hopeful_raman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 02:14:01 np0005558317 systemd[1]: libpod-conmon-77939f5abbbbc27181e2af2f69b7a74bf65c28931e3a8bde325718e81b0eec80.scope: Deactivated successfully.
Dec 13 02:14:02 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:14:02 np0005558317 podman[81388]: 2025-12-13 07:14:02.288042569 +0000 UTC m=+0.040851474 container exec 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: Added host compute-0
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: Saving service mon spec with placement compute-0
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: Saving service mgr spec with placement compute-0
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: Marking host: compute-0 for OSDSpec preview refresh.
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: Saving service osd.default_drive_group spec with placement compute-0
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: Removing daemon mgr.compute-0.ndpimg from compute-0 -- ports [8765]
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.ndpimg"} : dispatch
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.ndpimg"}]': finished
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:02 np0005558317 podman[81405]: 2025-12-13 07:14:02.421531618 +0000 UTC m=+0.047827931 container exec_died 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec 13 02:14:02 np0005558317 podman[81388]: 2025-12-13 07:14:02.424155792 +0000 UTC m=+0.176964687 container exec_died 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:14:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:14:02 np0005558317 podman[81525]: 2025-12-13 07:14:02.960649325 +0000 UTC m=+0.024243169 container create ec7d7ab89c7c24970d74db391eb1f4c0c53bb288617eb18a721396f1feb9b877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_khayyam, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:02 np0005558317 systemd[1]: Started libpod-conmon-ec7d7ab89c7c24970d74db391eb1f4c0c53bb288617eb18a721396f1feb9b877.scope.
Dec 13 02:14:03 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:03 np0005558317 podman[81525]: 2025-12-13 07:14:03.011114694 +0000 UTC m=+0.074708548 container init ec7d7ab89c7c24970d74db391eb1f4c0c53bb288617eb18a721396f1feb9b877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_khayyam, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:03 np0005558317 podman[81525]: 2025-12-13 07:14:03.015996322 +0000 UTC m=+0.079590156 container start ec7d7ab89c7c24970d74db391eb1f4c0c53bb288617eb18a721396f1feb9b877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_khayyam, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 02:14:03 np0005558317 podman[81525]: 2025-12-13 07:14:03.017117509 +0000 UTC m=+0.080711344 container attach ec7d7ab89c7c24970d74db391eb1f4c0c53bb288617eb18a721396f1feb9b877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_khayyam, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 02:14:03 np0005558317 serene_khayyam[81538]: 167 167
Dec 13 02:14:03 np0005558317 systemd[1]: libpod-ec7d7ab89c7c24970d74db391eb1f4c0c53bb288617eb18a721396f1feb9b877.scope: Deactivated successfully.
Dec 13 02:14:03 np0005558317 podman[81525]: 2025-12-13 07:14:03.020035345 +0000 UTC m=+0.083629178 container died ec7d7ab89c7c24970d74db391eb1f4c0c53bb288617eb18a721396f1feb9b877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 02:14:03 np0005558317 systemd[1]: var-lib-containers-storage-overlay-5d6d1fd0db588dec5ed01f4097df45d6638dbad364018b26e4f5c83afc564be0-merged.mount: Deactivated successfully.
Dec 13 02:14:03 np0005558317 podman[81525]: 2025-12-13 07:14:03.035522362 +0000 UTC m=+0.099116196 container remove ec7d7ab89c7c24970d74db391eb1f4c0c53bb288617eb18a721396f1feb9b877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_khayyam, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:03 np0005558317 podman[81525]: 2025-12-13 07:14:02.950756747 +0000 UTC m=+0.014350601 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:03 np0005558317 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:14:03 np0005558317 systemd[1]: libpod-conmon-ec7d7ab89c7c24970d74db391eb1f4c0c53bb288617eb18a721396f1feb9b877.scope: Deactivated successfully.
Dec 13 02:14:03 np0005558317 podman[81560]: 2025-12-13 07:14:03.145269453 +0000 UTC m=+0.027094630 container create 9780f177353a85440b02d3f788d54f4d0955d87514e28948cf5ded454189efaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 02:14:03 np0005558317 systemd[1]: Started libpod-conmon-9780f177353a85440b02d3f788d54f4d0955d87514e28948cf5ded454189efaa.scope.
Dec 13 02:14:03 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:03 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ec84a22bda216e97086a54e8d9d2e81f1fbcbc41380e0c30a7e78d1d622a88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:03 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ec84a22bda216e97086a54e8d9d2e81f1fbcbc41380e0c30a7e78d1d622a88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:03 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ec84a22bda216e97086a54e8d9d2e81f1fbcbc41380e0c30a7e78d1d622a88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:03 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ec84a22bda216e97086a54e8d9d2e81f1fbcbc41380e0c30a7e78d1d622a88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:03 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ec84a22bda216e97086a54e8d9d2e81f1fbcbc41380e0c30a7e78d1d622a88/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:03 np0005558317 podman[81560]: 2025-12-13 07:14:03.211320715 +0000 UTC m=+0.093145901 container init 9780f177353a85440b02d3f788d54f4d0955d87514e28948cf5ded454189efaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hofstadter, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 02:14:03 np0005558317 podman[81560]: 2025-12-13 07:14:03.216777715 +0000 UTC m=+0.098602881 container start 9780f177353a85440b02d3f788d54f4d0955d87514e28948cf5ded454189efaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 02:14:03 np0005558317 podman[81560]: 2025-12-13 07:14:03.218095452 +0000 UTC m=+0.099920639 container attach 9780f177353a85440b02d3f788d54f4d0955d87514e28948cf5ded454189efaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hofstadter, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:03 np0005558317 podman[81560]: 2025-12-13 07:14:03.134849162 +0000 UTC m=+0.016674340 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:03 np0005558317 ceph-mon[74928]: Removing key for mgr.compute-0.ndpimg
Dec 13 02:14:03 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:03 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:03 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:03 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:03 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:14:03 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:03 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:14:03 np0005558317 angry_hofstadter[81573]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:14:03 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:03 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:03 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 82d490c1-ea27-486f-9cfe-f392b9710718
Dec 13 02:14:04 np0005558317 ceph-mgr[75200]: [progress INFO root] Writing back 3 completed events
Dec 13 02:14:04 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 13 02:14:04 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:04 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "82d490c1-ea27-486f-9cfe-f392b9710718"} v 0)
Dec 13 02:14:04 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3552933581' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "82d490c1-ea27-486f-9cfe-f392b9710718"} : dispatch
Dec 13 02:14:04 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Dec 13 02:14:04 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 02:14:04 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3552933581' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "82d490c1-ea27-486f-9cfe-f392b9710718"}]': finished
Dec 13 02:14:04 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Dec 13 02:14:04 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Dec 13 02:14:04 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 02:14:04 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 02:14:04 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 02:14:04 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Dec 13 02:14:04 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec 13 02:14:04 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:14:04 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 13 02:14:04 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 13 02:14:04 np0005558317 lvm[81665]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:14:04 np0005558317 lvm[81665]: VG ceph_vg0 finished
Dec 13 02:14:04 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Dec 13 02:14:04 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 13 02:14:04 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1455557760' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 13 02:14:04 np0005558317 angry_hofstadter[81573]: stderr: got monmap epoch 1
Dec 13 02:14:04 np0005558317 angry_hofstadter[81573]: --> Creating keyring file for osd.0
Dec 13 02:14:04 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Dec 13 02:14:04 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Dec 13 02:14:04 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 82d490c1-ea27-486f-9cfe-f392b9710718 --setuser ceph --setgroup ceph
Dec 13 02:14:05 np0005558317 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:14:05 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:05 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/3552933581' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "82d490c1-ea27-486f-9cfe-f392b9710718"} : dispatch
Dec 13 02:14:05 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/3552933581' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "82d490c1-ea27-486f-9cfe-f392b9710718"}]': finished
Dec 13 02:14:05 np0005558317 ceph-mon[74928]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 13 02:14:05 np0005558317 ceph-mon[74928]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 13 02:14:05 np0005558317 angry_hofstadter[81573]: stderr: 2025-12-13T07:14:04.665+0000 7f04e77ea8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Dec 13 02:14:05 np0005558317 angry_hofstadter[81573]: stderr: 2025-12-13T07:14:04.684+0000 7f04e77ea8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Dec 13 02:14:05 np0005558317 angry_hofstadter[81573]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec 13 02:14:05 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 13 02:14:05 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 13 02:14:05 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 13 02:14:05 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 13 02:14:05 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 13 02:14:05 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 13 02:14:05 np0005558317 angry_hofstadter[81573]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 13 02:14:05 np0005558317 angry_hofstadter[81573]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Dec 13 02:14:05 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:05 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:05 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf
Dec 13 02:14:05 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf"} v 0)
Dec 13 02:14:05 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3438202900' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf"} : dispatch
Dec 13 02:14:05 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Dec 13 02:14:05 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 02:14:05 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3438202900' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf"}]': finished
Dec 13 02:14:05 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Dec 13 02:14:05 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Dec 13 02:14:05 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 02:14:05 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 02:14:05 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 02:14:05 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 02:14:05 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 02:14:05 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 02:14:05 np0005558317 lvm[82606]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:14:05 np0005558317 lvm[82606]: VG ceph_vg1 finished
Dec 13 02:14:05 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Dec 13 02:14:05 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Dec 13 02:14:05 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 13 02:14:05 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 13 02:14:05 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Dec 13 02:14:06 np0005558317 ceph-mon[74928]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 13 02:14:06 np0005558317 ceph-mon[74928]: Cluster is now healthy
Dec 13 02:14:06 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/3438202900' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf"} : dispatch
Dec 13 02:14:06 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/3438202900' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf"}]': finished
Dec 13 02:14:06 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 13 02:14:06 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/412080431' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 13 02:14:06 np0005558317 angry_hofstadter[81573]: stderr: got monmap epoch 1
Dec 13 02:14:06 np0005558317 angry_hofstadter[81573]: --> Creating keyring file for osd.1
Dec 13 02:14:06 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Dec 13 02:14:06 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Dec 13 02:14:06 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf --setuser ceph --setgroup ceph
Dec 13 02:14:06 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:14:06 np0005558317 angry_hofstadter[81573]: stderr: 2025-12-13T07:14:06.205+0000 7fa9dcd998c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Dec 13 02:14:06 np0005558317 angry_hofstadter[81573]: stderr: 2025-12-13T07:14:06.223+0000 7fa9dcd998c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Dec 13 02:14:06 np0005558317 angry_hofstadter[81573]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Dec 13 02:14:06 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 13 02:14:06 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 13 02:14:06 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 13 02:14:06 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 13 02:14:06 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 13 02:14:06 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 13 02:14:06 np0005558317 angry_hofstadter[81573]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 13 02:14:06 np0005558317 angry_hofstadter[81573]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Dec 13 02:14:06 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:06 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:06 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new b927bbdd-6a1c-42b3-b097-3003acae4885
Dec 13 02:14:07 np0005558317 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:14:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "b927bbdd-6a1c-42b3-b097-3003acae4885"} v 0)
Dec 13 02:14:07 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2213894842' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "b927bbdd-6a1c-42b3-b097-3003acae4885"} : dispatch
Dec 13 02:14:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Dec 13 02:14:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 02:14:07 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2213894842' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b927bbdd-6a1c-42b3-b097-3003acae4885"}]': finished
Dec 13 02:14:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Dec 13 02:14:07 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Dec 13 02:14:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 02:14:07 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 02:14:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 02:14:07 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 02:14:07 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 02:14:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:14:07 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:14:07 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 02:14:07 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:14:07 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Dec 13 02:14:07 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Dec 13 02:14:07 np0005558317 lvm[83546]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:14:07 np0005558317 lvm[83546]: VG ceph_vg2 finished
Dec 13 02:14:07 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 13 02:14:07 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 13 02:14:07 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Dec 13 02:14:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 13 02:14:07 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3714765293' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 13 02:14:07 np0005558317 angry_hofstadter[81573]: stderr: got monmap epoch 1
Dec 13 02:14:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:14:07 np0005558317 angry_hofstadter[81573]: --> Creating keyring file for osd.2
Dec 13 02:14:07 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Dec 13 02:14:07 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Dec 13 02:14:07 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid b927bbdd-6a1c-42b3-b097-3003acae4885 --setuser ceph --setgroup ceph
Dec 13 02:14:08 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/2213894842' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "b927bbdd-6a1c-42b3-b097-3003acae4885"} : dispatch
Dec 13 02:14:08 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/2213894842' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b927bbdd-6a1c-42b3-b097-3003acae4885"}]': finished
Dec 13 02:14:08 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:14:08 np0005558317 angry_hofstadter[81573]: stderr: 2025-12-13T07:14:07.759+0000 7f2f1ba098c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Dec 13 02:14:08 np0005558317 angry_hofstadter[81573]: stderr: 2025-12-13T07:14:07.777+0000 7f2f1ba098c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Dec 13 02:14:08 np0005558317 angry_hofstadter[81573]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Dec 13 02:14:08 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 13 02:14:08 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec 13 02:14:08 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 13 02:14:08 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec 13 02:14:08 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 13 02:14:08 np0005558317 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 13 02:14:08 np0005558317 angry_hofstadter[81573]: --> ceph-volume lvm activate successful for osd ID: 2
Dec 13 02:14:08 np0005558317 angry_hofstadter[81573]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Dec 13 02:14:08 np0005558317 systemd[1]: libpod-9780f177353a85440b02d3f788d54f4d0955d87514e28948cf5ded454189efaa.scope: Deactivated successfully.
Dec 13 02:14:08 np0005558317 systemd[1]: libpod-9780f177353a85440b02d3f788d54f4d0955d87514e28948cf5ded454189efaa.scope: Consumed 4.197s CPU time.
Dec 13 02:14:08 np0005558317 conmon[81573]: conmon 9780f177353a85440b02 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9780f177353a85440b02d3f788d54f4d0955d87514e28948cf5ded454189efaa.scope/container/memory.events
Dec 13 02:14:08 np0005558317 podman[84454]: 2025-12-13 07:14:08.456470357 +0000 UTC m=+0.015309914 container died 9780f177353a85440b02d3f788d54f4d0955d87514e28948cf5ded454189efaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 02:14:08 np0005558317 systemd[1]: var-lib-containers-storage-overlay-c2ec84a22bda216e97086a54e8d9d2e81f1fbcbc41380e0c30a7e78d1d622a88-merged.mount: Deactivated successfully.
Dec 13 02:14:08 np0005558317 podman[84454]: 2025-12-13 07:14:08.477640879 +0000 UTC m=+0.036480436 container remove 9780f177353a85440b02d3f788d54f4d0955d87514e28948cf5ded454189efaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 02:14:08 np0005558317 systemd[1]: libpod-conmon-9780f177353a85440b02d3f788d54f4d0955d87514e28948cf5ded454189efaa.scope: Deactivated successfully.
Dec 13 02:14:08 np0005558317 podman[84526]: 2025-12-13 07:14:08.814408095 +0000 UTC m=+0.029356023 container create 91303fa0c019e71baf2750f8261ccf659251e09aec7d95d19feb93c318b9ea9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 02:14:08 np0005558317 systemd[1]: Started libpod-conmon-91303fa0c019e71baf2750f8261ccf659251e09aec7d95d19feb93c318b9ea9d.scope.
Dec 13 02:14:08 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:08 np0005558317 podman[84526]: 2025-12-13 07:14:08.871461499 +0000 UTC m=+0.086409426 container init 91303fa0c019e71baf2750f8261ccf659251e09aec7d95d19feb93c318b9ea9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_roentgen, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:14:08 np0005558317 podman[84526]: 2025-12-13 07:14:08.876944427 +0000 UTC m=+0.091892355 container start 91303fa0c019e71baf2750f8261ccf659251e09aec7d95d19feb93c318b9ea9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_roentgen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 02:14:08 np0005558317 podman[84526]: 2025-12-13 07:14:08.878021582 +0000 UTC m=+0.092969510 container attach 91303fa0c019e71baf2750f8261ccf659251e09aec7d95d19feb93c318b9ea9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 02:14:08 np0005558317 thirsty_roentgen[84539]: 167 167
Dec 13 02:14:08 np0005558317 systemd[1]: libpod-91303fa0c019e71baf2750f8261ccf659251e09aec7d95d19feb93c318b9ea9d.scope: Deactivated successfully.
Dec 13 02:14:08 np0005558317 podman[84526]: 2025-12-13 07:14:08.8806798 +0000 UTC m=+0.095627728 container died 91303fa0c019e71baf2750f8261ccf659251e09aec7d95d19feb93c318b9ea9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_roentgen, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 02:14:08 np0005558317 systemd[1]: var-lib-containers-storage-overlay-5f1dfd8a80c80efbc67f0deb4d743e424addacb971d8d21f113b537b421514e0-merged.mount: Deactivated successfully.
Dec 13 02:14:08 np0005558317 podman[84526]: 2025-12-13 07:14:08.898182536 +0000 UTC m=+0.113130464 container remove 91303fa0c019e71baf2750f8261ccf659251e09aec7d95d19feb93c318b9ea9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_roentgen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:08 np0005558317 podman[84526]: 2025-12-13 07:14:08.803055341 +0000 UTC m=+0.018003269 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:08 np0005558317 systemd[1]: libpod-conmon-91303fa0c019e71baf2750f8261ccf659251e09aec7d95d19feb93c318b9ea9d.scope: Deactivated successfully.
Dec 13 02:14:09 np0005558317 podman[84561]: 2025-12-13 07:14:09.010602873 +0000 UTC m=+0.028553223 container create 17a78f5e2214de9015bd52719697bd7964aa944c2a1ca6a49c042a0345f3be5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_antonelli, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:09 np0005558317 systemd[1]: Started libpod-conmon-17a78f5e2214de9015bd52719697bd7964aa944c2a1ca6a49c042a0345f3be5e.scope.
Dec 13 02:14:09 np0005558317 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:14:09 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:09 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89933ee2d1d79bac39c20f4baa1392aaa32622edbf18e54eca214b6729d46a04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:09 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89933ee2d1d79bac39c20f4baa1392aaa32622edbf18e54eca214b6729d46a04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:09 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89933ee2d1d79bac39c20f4baa1392aaa32622edbf18e54eca214b6729d46a04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:09 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89933ee2d1d79bac39c20f4baa1392aaa32622edbf18e54eca214b6729d46a04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:14:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:14:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:14:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:14:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:14:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:14:09 np0005558317 podman[84561]: 2025-12-13 07:14:09.063494935 +0000 UTC m=+0.081445283 container init 17a78f5e2214de9015bd52719697bd7964aa944c2a1ca6a49c042a0345f3be5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_antonelli, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:09 np0005558317 podman[84561]: 2025-12-13 07:14:09.070326949 +0000 UTC m=+0.088277299 container start 17a78f5e2214de9015bd52719697bd7964aa944c2a1ca6a49c042a0345f3be5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_antonelli, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 02:14:09 np0005558317 podman[84561]: 2025-12-13 07:14:09.07173142 +0000 UTC m=+0.089681768 container attach 17a78f5e2214de9015bd52719697bd7964aa944c2a1ca6a49c042a0345f3be5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_antonelli, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:09 np0005558317 podman[84561]: 2025-12-13 07:14:08.999715225 +0000 UTC m=+0.017665594 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]: {
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:    "0": [
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:        {
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "devices": [
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "/dev/loop3"
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            ],
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "lv_name": "ceph_lv0",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "lv_size": "21470642176",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "name": "ceph_lv0",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "tags": {
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.cluster_name": "ceph",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.crush_device_class": "",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.encrypted": "0",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.objectstore": "bluestore",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.osd_id": "0",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.type": "block",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.vdo": "0",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.with_tpm": "0"
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            },
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "type": "block",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "vg_name": "ceph_vg0"
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:        }
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:    ],
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:    "1": [
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:        {
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "devices": [
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "/dev/loop4"
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            ],
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "lv_name": "ceph_lv1",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "lv_size": "21470642176",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "name": "ceph_lv1",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "tags": {
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.cluster_name": "ceph",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.crush_device_class": "",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.encrypted": "0",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.objectstore": "bluestore",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.osd_id": "1",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.type": "block",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.vdo": "0",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.with_tpm": "0"
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            },
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "type": "block",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "vg_name": "ceph_vg1"
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:        }
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:    ],
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:    "2": [
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:        {
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "devices": [
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "/dev/loop5"
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            ],
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "lv_name": "ceph_lv2",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "lv_size": "21470642176",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "name": "ceph_lv2",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "tags": {
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.cluster_name": "ceph",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.crush_device_class": "",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.encrypted": "0",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.objectstore": "bluestore",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.osd_id": "2",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.type": "block",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.vdo": "0",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:                "ceph.with_tpm": "0"
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            },
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "type": "block",
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:            "vg_name": "ceph_vg2"
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:        }
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]:    ]
Dec 13 02:14:09 np0005558317 magical_antonelli[84574]: }
Dec 13 02:14:09 np0005558317 systemd[1]: libpod-17a78f5e2214de9015bd52719697bd7964aa944c2a1ca6a49c042a0345f3be5e.scope: Deactivated successfully.
Dec 13 02:14:09 np0005558317 podman[84561]: 2025-12-13 07:14:09.311405263 +0000 UTC m=+0.329355612 container died 17a78f5e2214de9015bd52719697bd7964aa944c2a1ca6a49c042a0345f3be5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:09 np0005558317 systemd[1]: var-lib-containers-storage-overlay-89933ee2d1d79bac39c20f4baa1392aaa32622edbf18e54eca214b6729d46a04-merged.mount: Deactivated successfully.
Dec 13 02:14:09 np0005558317 podman[84561]: 2025-12-13 07:14:09.333065536 +0000 UTC m=+0.351015885 container remove 17a78f5e2214de9015bd52719697bd7964aa944c2a1ca6a49c042a0345f3be5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_antonelli, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:09 np0005558317 systemd[1]: libpod-conmon-17a78f5e2214de9015bd52719697bd7964aa944c2a1ca6a49c042a0345f3be5e.scope: Deactivated successfully.
Dec 13 02:14:09 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec 13 02:14:09 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 13 02:14:09 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:14:09 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:14:09 np0005558317 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Dec 13 02:14:09 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Dec 13 02:14:09 np0005558317 podman[84678]: 2025-12-13 07:14:09.748484791 +0000 UTC m=+0.028504250 container create c36ce2110d273970166e855f3451f654a1f70cb958c455b322add2dab8ed3e87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilbur, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:09 np0005558317 systemd[1]: Started libpod-conmon-c36ce2110d273970166e855f3451f654a1f70cb958c455b322add2dab8ed3e87.scope.
Dec 13 02:14:09 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:09 np0005558317 podman[84678]: 2025-12-13 07:14:09.794890066 +0000 UTC m=+0.074909516 container init c36ce2110d273970166e855f3451f654a1f70cb958c455b322add2dab8ed3e87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 02:14:09 np0005558317 podman[84678]: 2025-12-13 07:14:09.799765102 +0000 UTC m=+0.079784551 container start c36ce2110d273970166e855f3451f654a1f70cb958c455b322add2dab8ed3e87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilbur, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:09 np0005558317 podman[84678]: 2025-12-13 07:14:09.800968174 +0000 UTC m=+0.080987623 container attach c36ce2110d273970166e855f3451f654a1f70cb958c455b322add2dab8ed3e87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilbur, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:09 np0005558317 stupefied_wilbur[84691]: 167 167
Dec 13 02:14:09 np0005558317 systemd[1]: libpod-c36ce2110d273970166e855f3451f654a1f70cb958c455b322add2dab8ed3e87.scope: Deactivated successfully.
Dec 13 02:14:09 np0005558317 podman[84696]: 2025-12-13 07:14:09.832306532 +0000 UTC m=+0.015889885 container died c36ce2110d273970166e855f3451f654a1f70cb958c455b322add2dab8ed3e87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilbur, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:09 np0005558317 podman[84678]: 2025-12-13 07:14:09.736918245 +0000 UTC m=+0.016937715 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:09 np0005558317 systemd[1]: var-lib-containers-storage-overlay-b6284b8732a79d90527d5201718ee9a72c37f3defb289445bf5fc2b6fa955c7e-merged.mount: Deactivated successfully.
Dec 13 02:14:09 np0005558317 podman[84696]: 2025-12-13 07:14:09.848399347 +0000 UTC m=+0.031982680 container remove c36ce2110d273970166e855f3451f654a1f70cb958c455b322add2dab8ed3e87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:09 np0005558317 systemd[1]: libpod-conmon-c36ce2110d273970166e855f3451f654a1f70cb958c455b322add2dab8ed3e87.scope: Deactivated successfully.
Dec 13 02:14:10 np0005558317 podman[84720]: 2025-12-13 07:14:10.027486634 +0000 UTC m=+0.030509320 container create f5d8bd1b35042614e0e682dae1d0017aa391f7a3e0518ff4f6e3c75f5f5c571d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 02:14:10 np0005558317 systemd[1]: Started libpod-conmon-f5d8bd1b35042614e0e682dae1d0017aa391f7a3e0518ff4f6e3c75f5f5c571d.scope.
Dec 13 02:14:10 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 13 02:14:10 np0005558317 ceph-mon[74928]: Deploying daemon osd.0 on compute-0
Dec 13 02:14:10 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:10 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d492730f843b57727c7f957f4474ed3cbc6586f638a41b67bf3faa5d80d17d81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:10 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d492730f843b57727c7f957f4474ed3cbc6586f638a41b67bf3faa5d80d17d81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:10 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d492730f843b57727c7f957f4474ed3cbc6586f638a41b67bf3faa5d80d17d81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:10 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d492730f843b57727c7f957f4474ed3cbc6586f638a41b67bf3faa5d80d17d81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:10 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d492730f843b57727c7f957f4474ed3cbc6586f638a41b67bf3faa5d80d17d81/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:10 np0005558317 podman[84720]: 2025-12-13 07:14:10.096761254 +0000 UTC m=+0.099783950 container init f5d8bd1b35042614e0e682dae1d0017aa391f7a3e0518ff4f6e3c75f5f5c571d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate-test, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 02:14:10 np0005558317 podman[84720]: 2025-12-13 07:14:10.102117765 +0000 UTC m=+0.105140442 container start f5d8bd1b35042614e0e682dae1d0017aa391f7a3e0518ff4f6e3c75f5f5c571d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate-test, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 02:14:10 np0005558317 podman[84720]: 2025-12-13 07:14:10.103110662 +0000 UTC m=+0.106133348 container attach f5d8bd1b35042614e0e682dae1d0017aa391f7a3e0518ff4f6e3c75f5f5c571d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 02:14:10 np0005558317 podman[84720]: 2025-12-13 07:14:10.017146113 +0000 UTC m=+0.020168809 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:10 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:14:10 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate-test[84733]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 13 02:14:10 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate-test[84733]:                            [--no-systemd] [--no-tmpfs]
Dec 13 02:14:10 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate-test[84733]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 13 02:14:10 np0005558317 systemd[1]: libpod-f5d8bd1b35042614e0e682dae1d0017aa391f7a3e0518ff4f6e3c75f5f5c571d.scope: Deactivated successfully.
Dec 13 02:14:10 np0005558317 podman[84720]: 2025-12-13 07:14:10.25672094 +0000 UTC m=+0.259743636 container died f5d8bd1b35042614e0e682dae1d0017aa391f7a3e0518ff4f6e3c75f5f5c571d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 02:14:10 np0005558317 systemd[1]: var-lib-containers-storage-overlay-d492730f843b57727c7f957f4474ed3cbc6586f638a41b67bf3faa5d80d17d81-merged.mount: Deactivated successfully.
Dec 13 02:14:10 np0005558317 podman[84720]: 2025-12-13 07:14:10.278914585 +0000 UTC m=+0.281937271 container remove f5d8bd1b35042614e0e682dae1d0017aa391f7a3e0518ff4f6e3c75f5f5c571d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate-test, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 02:14:10 np0005558317 systemd[1]: libpod-conmon-f5d8bd1b35042614e0e682dae1d0017aa391f7a3e0518ff4f6e3c75f5f5c571d.scope: Deactivated successfully.
Dec 13 02:14:10 np0005558317 systemd[1]: Reloading.
Dec 13 02:14:10 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:14:10 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:14:10 np0005558317 systemd[1]: Reloading.
Dec 13 02:14:10 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:14:10 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:14:10 np0005558317 systemd[1]: Starting Ceph osd.0 for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de...
Dec 13 02:14:11 np0005558317 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:14:11 np0005558317 podman[84883]: 2025-12-13 07:14:11.05088436 +0000 UTC m=+0.028288976 container create d81c1093e9a8e56fd0a02c3dff4ed9509b79f42968910def74dcb033dde2754e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:11 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:11 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b93e04b2a1b6337312b39fbbc810aef378ac51d3bf18186e66f0c421c05cba0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:11 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b93e04b2a1b6337312b39fbbc810aef378ac51d3bf18186e66f0c421c05cba0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:11 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b93e04b2a1b6337312b39fbbc810aef378ac51d3bf18186e66f0c421c05cba0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:11 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b93e04b2a1b6337312b39fbbc810aef378ac51d3bf18186e66f0c421c05cba0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:11 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b93e04b2a1b6337312b39fbbc810aef378ac51d3bf18186e66f0c421c05cba0c/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:11 np0005558317 podman[84883]: 2025-12-13 07:14:11.104341002 +0000 UTC m=+0.081745618 container init d81c1093e9a8e56fd0a02c3dff4ed9509b79f42968910def74dcb033dde2754e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:11 np0005558317 podman[84883]: 2025-12-13 07:14:11.108909761 +0000 UTC m=+0.086314367 container start d81c1093e9a8e56fd0a02c3dff4ed9509b79f42968910def74dcb033dde2754e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 02:14:11 np0005558317 podman[84883]: 2025-12-13 07:14:11.112409842 +0000 UTC m=+0.089814468 container attach d81c1093e9a8e56fd0a02c3dff4ed9509b79f42968910def74dcb033dde2754e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 02:14:11 np0005558317 podman[84883]: 2025-12-13 07:14:11.039282337 +0000 UTC m=+0.016686964 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:11 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate[84895]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:11 np0005558317 bash[84883]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:11 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate[84895]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:11 np0005558317 bash[84883]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:11 np0005558317 lvm[84979]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:14:11 np0005558317 lvm[84980]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:14:11 np0005558317 lvm[84979]: VG ceph_vg0 finished
Dec 13 02:14:11 np0005558317 lvm[84980]: VG ceph_vg1 finished
Dec 13 02:14:11 np0005558317 lvm[84983]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:14:11 np0005558317 lvm[84983]: VG ceph_vg2 finished
Dec 13 02:14:11 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate[84895]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 13 02:14:11 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate[84895]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:11 np0005558317 bash[84883]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 13 02:14:11 np0005558317 bash[84883]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:11 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate[84895]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:11 np0005558317 bash[84883]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:11 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate[84895]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 13 02:14:11 np0005558317 bash[84883]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 13 02:14:11 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate[84895]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 13 02:14:11 np0005558317 bash[84883]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 13 02:14:11 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate[84895]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 13 02:14:11 np0005558317 bash[84883]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 13 02:14:11 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate[84895]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 13 02:14:11 np0005558317 bash[84883]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 13 02:14:11 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate[84895]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 13 02:14:11 np0005558317 bash[84883]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 13 02:14:11 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate[84895]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 13 02:14:11 np0005558317 bash[84883]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 13 02:14:11 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate[84895]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 13 02:14:11 np0005558317 bash[84883]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 13 02:14:11 np0005558317 systemd[1]: libpod-d81c1093e9a8e56fd0a02c3dff4ed9509b79f42968910def74dcb033dde2754e.scope: Deactivated successfully.
Dec 13 02:14:11 np0005558317 podman[84883]: 2025-12-13 07:14:11.925754796 +0000 UTC m=+0.903159412 container died d81c1093e9a8e56fd0a02c3dff4ed9509b79f42968910def74dcb033dde2754e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 02:14:11 np0005558317 systemd[1]: libpod-d81c1093e9a8e56fd0a02c3dff4ed9509b79f42968910def74dcb033dde2754e.scope: Consumed 1.133s CPU time.
Dec 13 02:14:11 np0005558317 systemd[1]: var-lib-containers-storage-overlay-b93e04b2a1b6337312b39fbbc810aef378ac51d3bf18186e66f0c421c05cba0c-merged.mount: Deactivated successfully.
Dec 13 02:14:11 np0005558317 podman[84883]: 2025-12-13 07:14:11.949347651 +0000 UTC m=+0.926752257 container remove d81c1093e9a8e56fd0a02c3dff4ed9509b79f42968910def74dcb033dde2754e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:12 np0005558317 podman[85124]: 2025-12-13 07:14:12.094803658 +0000 UTC m=+0.028672427 container create 5e169e1385f98bf8a58844e41c31305318f100b9850e1f4defaf308d2b1dfde7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 02:14:12 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89559cb37910374ddd1f527bb0a82bdc91c2d7a0c74c265319fb98ee5101af6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:12 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89559cb37910374ddd1f527bb0a82bdc91c2d7a0c74c265319fb98ee5101af6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:12 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89559cb37910374ddd1f527bb0a82bdc91c2d7a0c74c265319fb98ee5101af6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:12 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89559cb37910374ddd1f527bb0a82bdc91c2d7a0c74c265319fb98ee5101af6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:12 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89559cb37910374ddd1f527bb0a82bdc91c2d7a0c74c265319fb98ee5101af6/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:12 np0005558317 podman[85124]: 2025-12-13 07:14:12.145040517 +0000 UTC m=+0.078909286 container init 5e169e1385f98bf8a58844e41c31305318f100b9850e1f4defaf308d2b1dfde7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 02:14:12 np0005558317 podman[85124]: 2025-12-13 07:14:12.149431763 +0000 UTC m=+0.083300532 container start 5e169e1385f98bf8a58844e41c31305318f100b9850e1f4defaf308d2b1dfde7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 02:14:12 np0005558317 bash[85124]: 5e169e1385f98bf8a58844e41c31305318f100b9850e1f4defaf308d2b1dfde7
Dec 13 02:14:12 np0005558317 podman[85124]: 2025-12-13 07:14:12.083018402 +0000 UTC m=+0.016887171 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:12 np0005558317 systemd[1]: Started Ceph osd.0 for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de.
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: pidfile_write: ignore empty --pid-file
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 02:14:12 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:14:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:14:12 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:14:12 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec 13 02:14:12 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 13 02:14:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:14:12 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:14:12 np0005558317 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Dec 13 02:14:12 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: load: jerasure load: lrc 
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d9000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d9000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d9000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d9000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluefs mount
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluefs mount shared_bdev_used = 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: RocksDB version: 7.9.2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Git sha 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: DB SUMMARY
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: DB Session ID:  ALGEVV9HATHAALWVAQ6X
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: CURRENT file:  CURRENT
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                         Options.error_if_exists: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.create_if_missing: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                                     Options.env: 0x557f62281ea0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                                Options.info_log: 0x557f632e88a0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                              Options.statistics: (nil)
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                               Options.use_fsync: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                              Options.db_log_dir: 
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                                 Options.wal_dir: db.wal
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.write_buffer_manager: 0x557f622e2b40
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.unordered_write: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                               Options.row_cache: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                              Options.wal_filter: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.two_write_queues: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.wal_compression: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.atomic_flush: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.max_background_jobs: 4
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.max_background_compactions: -1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.max_subcompactions: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.max_open_files: -1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Compression algorithms supported:
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: 	kZSTD supported: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: 	kXpressCompression supported: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: 	kBZip2Compression supported: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: 	kLZ4Compression supported: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: 	kZlibCompression supported: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: 	kLZ4HCCompression supported: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: 	kSnappyCompression supported: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557f622858d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557f622858d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557f622858d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557f622858d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557f622858d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557f622858d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557f622858d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557f62285a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557f62285a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557f62285a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 255f2825-be90-45a3-bc3a-4eac136bcf1c
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610052436844, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610052438192, "job": 1, "event": "recovery_finished"}
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: freelist init
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: freelist _read_cfg
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluefs umount
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d9000 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d9000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d9000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d9000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bdev(0x557f622d9000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluefs mount
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluefs mount shared_bdev_used = 27262976
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: RocksDB version: 7.9.2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Git sha 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: DB SUMMARY
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: DB Session ID:  ALGEVV9HATHAALWVAQ6W
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: CURRENT file:  CURRENT
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                         Options.error_if_exists: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.create_if_missing: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                                     Options.env: 0x557f634baa80
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                                Options.info_log: 0x557f632e8a20
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                              Options.statistics: (nil)
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                               Options.use_fsync: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                              Options.db_log_dir: 
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                                 Options.wal_dir: db.wal
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.write_buffer_manager: 0x557f622e3900
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.unordered_write: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                               Options.row_cache: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                              Options.wal_filter: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.two_write_queues: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.wal_compression: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.atomic_flush: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.max_background_jobs: 4
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.max_background_compactions: -1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.max_subcompactions: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.max_open_files: -1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Compression algorithms supported:
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: 	kZSTD supported: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: 	kXpressCompression supported: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: 	kBZip2Compression supported: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: 	kLZ4Compression supported: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: 	kZlibCompression supported: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: 	kLZ4HCCompression supported: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: 	kSnappyCompression supported: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557f622858d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557f622858d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557f622858d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557f622858d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557f622858d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557f622858d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557f622858d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e90c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557f62285a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e90c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557f62285a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e90c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557f62285a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 255f2825-be90-45a3-bc3a-4eac136bcf1c
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610052483429, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610052486215, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610052, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "255f2825-be90-45a3-bc3a-4eac136bcf1c", "db_session_id": "ALGEVV9HATHAALWVAQ6W", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610052487985, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610052, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "255f2825-be90-45a3-bc3a-4eac136bcf1c", "db_session_id": "ALGEVV9HATHAALWVAQ6W", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610052489519, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610052, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "255f2825-be90-45a3-bc3a-4eac136bcf1c", "db_session_id": "ALGEVV9HATHAALWVAQ6W", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610052491132, "job": 1, "event": "recovery_finished"}
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557f634ce000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: DB pointer 0x557f634a4000
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 460.80 MB usag
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: _get_class not permitted to load lua
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: _get_class not permitted to load sdk
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: osd.0 0 load_pgs
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: osd.0 0 load_pgs opened 0 pgs
Dec 13 02:14:12 np0005558317 ceph-osd[85140]: osd.0 0 log_to_monitors true
Dec 13 02:14:12 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0[85136]: 2025-12-13T07:14:12.509+0000 7f1efa0d98c0 -1 osd.0 0 log_to_monitors true
Dec 13 02:14:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Dec 13 02:14:12 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3611488797,v1:192.168.122.100:6803/3611488797]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Dec 13 02:14:12 np0005558317 podman[85676]: 2025-12-13 07:14:12.607426164 +0000 UTC m=+0.027249853 container create 8b1829afb6ea68e40ad4152a3ec3f195859368c38bc48fbab865780ffda484a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_heyrovsky, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 02:14:12 np0005558317 systemd[1]: Started libpod-conmon-8b1829afb6ea68e40ad4152a3ec3f195859368c38bc48fbab865780ffda484a8.scope.
Dec 13 02:14:12 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:12 np0005558317 podman[85676]: 2025-12-13 07:14:12.664074565 +0000 UTC m=+0.083898266 container init 8b1829afb6ea68e40ad4152a3ec3f195859368c38bc48fbab865780ffda484a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_heyrovsky, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 02:14:12 np0005558317 podman[85676]: 2025-12-13 07:14:12.668956565 +0000 UTC m=+0.088780254 container start 8b1829afb6ea68e40ad4152a3ec3f195859368c38bc48fbab865780ffda484a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 02:14:12 np0005558317 podman[85676]: 2025-12-13 07:14:12.669995968 +0000 UTC m=+0.089819659 container attach 8b1829afb6ea68e40ad4152a3ec3f195859368c38bc48fbab865780ffda484a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_heyrovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:12 np0005558317 vibrant_heyrovsky[85689]: 167 167
Dec 13 02:14:12 np0005558317 podman[85676]: 2025-12-13 07:14:12.672830297 +0000 UTC m=+0.092653987 container died 8b1829afb6ea68e40ad4152a3ec3f195859368c38bc48fbab865780ffda484a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 02:14:12 np0005558317 systemd[1]: libpod-8b1829afb6ea68e40ad4152a3ec3f195859368c38bc48fbab865780ffda484a8.scope: Deactivated successfully.
Dec 13 02:14:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:14:12 np0005558317 systemd[1]: var-lib-containers-storage-overlay-a87e8cdb50aa6252ffed7d7608533f64fd6b344e4ea5e650a71d1e5efc99e8e7-merged.mount: Deactivated successfully.
Dec 13 02:14:12 np0005558317 podman[85676]: 2025-12-13 07:14:12.691808898 +0000 UTC m=+0.111632589 container remove 8b1829afb6ea68e40ad4152a3ec3f195859368c38bc48fbab865780ffda484a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_heyrovsky, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 02:14:12 np0005558317 podman[85676]: 2025-12-13 07:14:12.59642466 +0000 UTC m=+0.016248370 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:12 np0005558317 systemd[1]: libpod-conmon-8b1829afb6ea68e40ad4152a3ec3f195859368c38bc48fbab865780ffda484a8.scope: Deactivated successfully.
Dec 13 02:14:12 np0005558317 podman[85716]: 2025-12-13 07:14:12.870312066 +0000 UTC m=+0.030558042 container create ce843830882d285cb0349a5f8ab649894471c6dc4c6d48b6e0d3e65ecd9edd6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate-test, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:12 np0005558317 systemd[1]: Started libpod-conmon-ce843830882d285cb0349a5f8ab649894471c6dc4c6d48b6e0d3e65ecd9edd6e.scope.
Dec 13 02:14:12 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:12 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28b09b6645b8706d6d3e486dd7e7fe43b8fc00514ec1a2ff49d48b88d12f5cc9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:12 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28b09b6645b8706d6d3e486dd7e7fe43b8fc00514ec1a2ff49d48b88d12f5cc9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:12 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28b09b6645b8706d6d3e486dd7e7fe43b8fc00514ec1a2ff49d48b88d12f5cc9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:12 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28b09b6645b8706d6d3e486dd7e7fe43b8fc00514ec1a2ff49d48b88d12f5cc9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:12 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28b09b6645b8706d6d3e486dd7e7fe43b8fc00514ec1a2ff49d48b88d12f5cc9/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:12 np0005558317 podman[85716]: 2025-12-13 07:14:12.933052001 +0000 UTC m=+0.093297997 container init ce843830882d285cb0349a5f8ab649894471c6dc4c6d48b6e0d3e65ecd9edd6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate-test, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 02:14:12 np0005558317 podman[85716]: 2025-12-13 07:14:12.938642803 +0000 UTC m=+0.098888779 container start ce843830882d285cb0349a5f8ab649894471c6dc4c6d48b6e0d3e65ecd9edd6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate-test, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 02:14:12 np0005558317 podman[85716]: 2025-12-13 07:14:12.939665145 +0000 UTC m=+0.099911121 container attach ce843830882d285cb0349a5f8ab649894471c6dc4c6d48b6e0d3e65ecd9edd6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 02:14:12 np0005558317 podman[85716]: 2025-12-13 07:14:12.85941555 +0000 UTC m=+0.019661546 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:13 np0005558317 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:14:13 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate-test[85731]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 13 02:14:13 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate-test[85731]:                            [--no-systemd] [--no-tmpfs]
Dec 13 02:14:13 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate-test[85731]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 13 02:14:13 np0005558317 systemd[1]: libpod-ce843830882d285cb0349a5f8ab649894471c6dc4c6d48b6e0d3e65ecd9edd6e.scope: Deactivated successfully.
Dec 13 02:14:13 np0005558317 podman[85716]: 2025-12-13 07:14:13.098757469 +0000 UTC m=+0.259003445 container died ce843830882d285cb0349a5f8ab649894471c6dc4c6d48b6e0d3e65ecd9edd6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 02:14:13 np0005558317 systemd[1]: var-lib-containers-storage-overlay-28b09b6645b8706d6d3e486dd7e7fe43b8fc00514ec1a2ff49d48b88d12f5cc9-merged.mount: Deactivated successfully.
Dec 13 02:14:13 np0005558317 podman[85716]: 2025-12-13 07:14:13.121146041 +0000 UTC m=+0.281392017 container remove ce843830882d285cb0349a5f8ab649894471c6dc4c6d48b6e0d3e65ecd9edd6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate-test, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 02:14:13 np0005558317 systemd[1]: libpod-conmon-ce843830882d285cb0349a5f8ab649894471c6dc4c6d48b6e0d3e65ecd9edd6e.scope: Deactivated successfully.
Dec 13 02:14:13 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:13 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:13 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 13 02:14:13 np0005558317 ceph-mon[74928]: Deploying daemon osd.1 on compute-0
Dec 13 02:14:13 np0005558317 ceph-mon[74928]: from='osd.0 [v2:192.168.122.100:6802/3611488797,v1:192.168.122.100:6803/3611488797]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Dec 13 02:14:13 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Dec 13 02:14:13 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 02:14:13 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3611488797,v1:192.168.122.100:6803/3611488797]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 13 02:14:13 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Dec 13 02:14:13 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Dec 13 02:14:13 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec 13 02:14:13 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3611488797,v1:192.168.122.100:6803/3611488797]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 13 02:14:13 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Dec 13 02:14:13 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 02:14:13 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 02:14:13 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 02:14:13 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 02:14:13 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 02:14:13 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:14:13 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:14:13 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 02:14:13 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:14:13 np0005558317 systemd[1]: Reloading.
Dec 13 02:14:13 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:14:13 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:14:13 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 13 02:14:13 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 13 02:14:13 np0005558317 systemd[1]: Reloading.
Dec 13 02:14:13 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:14:13 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:14:13 np0005558317 systemd[1]: Starting Ceph osd.1 for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de...
Dec 13 02:14:13 np0005558317 podman[85877]: 2025-12-13 07:14:13.9067344 +0000 UTC m=+0.028744924 container create b154fd48736f3e08f17665943c8f20ac3e0e7e0b529e98688c16a073fc15c235 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Dec 13 02:14:13 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:13 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8764e7e1896e1ad689ac466e400826bf8e80526df4d0e7f2862382dd8dce1af0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:13 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8764e7e1896e1ad689ac466e400826bf8e80526df4d0e7f2862382dd8dce1af0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:13 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8764e7e1896e1ad689ac466e400826bf8e80526df4d0e7f2862382dd8dce1af0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:13 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8764e7e1896e1ad689ac466e400826bf8e80526df4d0e7f2862382dd8dce1af0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:13 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8764e7e1896e1ad689ac466e400826bf8e80526df4d0e7f2862382dd8dce1af0/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:13 np0005558317 podman[85877]: 2025-12-13 07:14:13.948248308 +0000 UTC m=+0.070258832 container init b154fd48736f3e08f17665943c8f20ac3e0e7e0b529e98688c16a073fc15c235 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:13 np0005558317 podman[85877]: 2025-12-13 07:14:13.953236086 +0000 UTC m=+0.075246600 container start b154fd48736f3e08f17665943c8f20ac3e0e7e0b529e98688c16a073fc15c235 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 02:14:13 np0005558317 podman[85877]: 2025-12-13 07:14:13.954959236 +0000 UTC m=+0.076969750 container attach b154fd48736f3e08f17665943c8f20ac3e0e7e0b529e98688c16a073fc15c235 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 02:14:13 np0005558317 podman[85877]: 2025-12-13 07:14:13.894345588 +0000 UTC m=+0.016356122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:14 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate[85889]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:14 np0005558317 bash[85877]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:14 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate[85889]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:14 np0005558317 bash[85877]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:14 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:14:14 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Dec 13 02:14:14 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 02:14:14 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3611488797,v1:192.168.122.100:6803/3611488797]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 13 02:14:14 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Dec 13 02:14:14 np0005558317 ceph-osd[85140]: osd.0 0 done with init, starting boot process
Dec 13 02:14:14 np0005558317 ceph-osd[85140]: osd.0 0 start_boot
Dec 13 02:14:14 np0005558317 ceph-osd[85140]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 13 02:14:14 np0005558317 ceph-osd[85140]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 13 02:14:14 np0005558317 ceph-osd[85140]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 13 02:14:14 np0005558317 ceph-osd[85140]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 13 02:14:14 np0005558317 ceph-osd[85140]: osd.0 0  bench count 12288000 bsize 4 KiB
Dec 13 02:14:14 np0005558317 ceph-mon[74928]: from='osd.0 [v2:192.168.122.100:6802/3611488797,v1:192.168.122.100:6803/3611488797]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 13 02:14:14 np0005558317 ceph-mon[74928]: from='osd.0 [v2:192.168.122.100:6802/3611488797,v1:192.168.122.100:6803/3611488797]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 13 02:14:14 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Dec 13 02:14:14 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 02:14:14 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 02:14:14 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 02:14:14 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 02:14:14 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:14:14 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:14:14 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 02:14:14 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 02:14:14 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:14:14 np0005558317 ceph-mgr[75200]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3611488797; not ready for session (expect reconnect)
Dec 13 02:14:14 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 02:14:14 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 02:14:14 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 02:14:14 np0005558317 lvm[85971]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:14:14 np0005558317 lvm[85971]: VG ceph_vg0 finished
Dec 13 02:14:14 np0005558317 lvm[85973]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:14:14 np0005558317 lvm[85973]: VG ceph_vg1 finished
Dec 13 02:14:14 np0005558317 lvm[85975]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:14:14 np0005558317 lvm[85975]: VG ceph_vg2 finished
Dec 13 02:14:14 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate[85889]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 13 02:14:14 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate[85889]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:14 np0005558317 bash[85877]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 13 02:14:14 np0005558317 bash[85877]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:14 np0005558317 lvm[85977]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:14:14 np0005558317 lvm[85977]: VG ceph_vg2 finished
Dec 13 02:14:14 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate[85889]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:14 np0005558317 bash[85877]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:14 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate[85889]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 13 02:14:14 np0005558317 bash[85877]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 13 02:14:14 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate[85889]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 13 02:14:14 np0005558317 bash[85877]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 13 02:14:14 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate[85889]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 13 02:14:14 np0005558317 bash[85877]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 13 02:14:14 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate[85889]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 13 02:14:14 np0005558317 bash[85877]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 13 02:14:14 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate[85889]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 13 02:14:14 np0005558317 bash[85877]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 13 02:14:14 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate[85889]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 13 02:14:14 np0005558317 bash[85877]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 13 02:14:14 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate[85889]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 13 02:14:14 np0005558317 bash[85877]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 13 02:14:14 np0005558317 systemd[1]: libpod-b154fd48736f3e08f17665943c8f20ac3e0e7e0b529e98688c16a073fc15c235.scope: Deactivated successfully.
Dec 13 02:14:14 np0005558317 systemd[1]: libpod-b154fd48736f3e08f17665943c8f20ac3e0e7e0b529e98688c16a073fc15c235.scope: Consumed 1.132s CPU time.
Dec 13 02:14:14 np0005558317 podman[85877]: 2025-12-13 07:14:14.813904991 +0000 UTC m=+0.935915505 container died b154fd48736f3e08f17665943c8f20ac3e0e7e0b529e98688c16a073fc15c235 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 02:14:14 np0005558317 systemd[1]: var-lib-containers-storage-overlay-8764e7e1896e1ad689ac466e400826bf8e80526df4d0e7f2862382dd8dce1af0-merged.mount: Deactivated successfully.
Dec 13 02:14:14 np0005558317 podman[85877]: 2025-12-13 07:14:14.912084766 +0000 UTC m=+1.034095281 container remove b154fd48736f3e08f17665943c8f20ac3e0e7e0b529e98688c16a073fc15c235 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 02:14:15 np0005558317 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:14:15 np0005558317 podman[86126]: 2025-12-13 07:14:15.120844829 +0000 UTC m=+0.088886333 container create c0e0c03f97b0b2b02555f476cf4558ef6f7c2cd731350718d9262d59a0b7be03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:15 np0005558317 podman[86126]: 2025-12-13 07:14:15.049285543 +0000 UTC m=+0.017327077 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:15 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e51303f4b9f368d744b6dadce3bbf2364b12d9f150d990d2abdc488ca47952/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:15 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e51303f4b9f368d744b6dadce3bbf2364b12d9f150d990d2abdc488ca47952/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:15 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e51303f4b9f368d744b6dadce3bbf2364b12d9f150d990d2abdc488ca47952/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:15 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e51303f4b9f368d744b6dadce3bbf2364b12d9f150d990d2abdc488ca47952/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:15 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e51303f4b9f368d744b6dadce3bbf2364b12d9f150d990d2abdc488ca47952/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:15 np0005558317 podman[86126]: 2025-12-13 07:14:15.217156191 +0000 UTC m=+0.185197715 container init c0e0c03f97b0b2b02555f476cf4558ef6f7c2cd731350718d9262d59a0b7be03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:14:15 np0005558317 podman[86126]: 2025-12-13 07:14:15.221887065 +0000 UTC m=+0.189928569 container start c0e0c03f97b0b2b02555f476cf4558ef6f7c2cd731350718d9262d59a0b7be03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 02:14:15 np0005558317 bash[86126]: c0e0c03f97b0b2b02555f476cf4558ef6f7c2cd731350718d9262d59a0b7be03
Dec 13 02:14:15 np0005558317 systemd[1]: Started Ceph osd.1 for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de.
Dec 13 02:14:15 np0005558317 ceph-mgr[75200]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3611488797; not ready for session (expect reconnect)
Dec 13 02:14:15 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 02:14:15 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 02:14:15 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:14:15 np0005558317 ceph-mon[74928]: from='osd.0 [v2:192.168.122.100:6802/3611488797,v1:192.168.122.100:6803/3611488797]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 13 02:14:15 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: pidfile_write: ignore empty --pid-file
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:15 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) close
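Annotation (not part of the log): the bdev open line above reports the same size in hex and decimal, and the "20 GiB" figure is a rounding of just under 20 GiB. A quick sanity check of those numbers:

```python
# Values copied from the "bdev ... open size" line above.
size_hex = 0x4ffc00000
size_dec = 21470642176

assert size_hex == size_dec          # hex and decimal byte counts agree

gib = size_dec / 2**30               # ~19.996 GiB; the log rounds this to "20 GiB"
print(round(gib, 3))                 # 19.996

# block_size 4096 (4 KiB) as logged, even though the backing file
# reports st_blksize 512 -- hence the "using bdev_block_size 4096 anyway" line.
assert size_dec % 4096 == 0
```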
Dec 13 02:14:15 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:14:15 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:15 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec 13 02:14:15 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 13 02:14:15 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:14:15 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:14:15 np0005558317 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Dec 13 02:14:15 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0bc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0bc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0bc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0bc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0bc00 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: load: jerasure load: lrc 
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
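Annotation (not part of the log): the `_set_cache_sizes` ratios partition the 1 GiB BlueStore cache, and the 0.45 meta share is consistent with the RocksDB `block_cache` capacity of 483183820 bytes that appears later in this boot sequence. A sketch of that arithmetic:

```python
# Values copied from the _set_cache_sizes line above.
cache_size = 1073741824                      # 1 GiB
ratios = {"meta": 0.45, "kv": 0.45, "kv_onode": 0.04, "data": 0.06}

# The four shares cover the whole cache.
assert abs(sum(ratios.values()) - 1.0) < 1e-9

# 45% of 1 GiB matches the BinnedLRUCache capacity logged by RocksDB.
meta_bytes = int(cache_size * ratios["meta"])
print(meta_bytes)                            # 483183820
```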
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
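Annotation (not part of the log): the mClock figures above are consistent with the documented rotational-device defaults `osd_mclock_max_sequential_bandwidth_hdd` = 150 MiB/s and `osd_mclock_max_capacity_iops_hdd` = 315 (the 315 IOPS value is an assumption based on those defaults, not something printed in this log):

```python
# Assumed Ceph HDD defaults; the log only prints the derived results.
bandwidth = 150 * 2**20        # 150 MiB/s -> 157286400 bytes/second, as logged
iops = 315                     # osd_mclock_max_capacity_iops_hdd default

cost_per_io = bandwidth / iops
print(f"{cost_per_io:.2f}")    # 499321.90 bytes/io, matching the logged value
```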
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedb000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedb000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedb000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedb000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluefs mount
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluefs mount shared_bdev_used = 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: RocksDB version: 7.9.2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Git sha 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: DB SUMMARY
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: DB Session ID:  GQTJY16QVIP0A3829K2S
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: CURRENT file:  CURRENT
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                         Options.error_if_exists: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.create_if_missing: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                                     Options.env: 0x560fdecc3c00
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                                Options.info_log: 0x560fdeece900
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                              Options.statistics: (nil)
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                               Options.use_fsync: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                              Options.db_log_dir: 
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                                 Options.wal_dir: db.wal
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.write_buffer_manager: 0x560fded74b40
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.unordered_write: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                               Options.row_cache: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                              Options.wal_filter: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.two_write_queues: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.wal_compression: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.atomic_flush: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.max_background_jobs: 4
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.max_background_compactions: -1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.max_subcompactions: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.max_open_files: -1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Compression algorithms supported:
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: 	kZSTD supported: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: 	kXpressCompression supported: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: 	kBZip2Compression supported: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: 	kLZ4Compression supported: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: 	kZlibCompression supported: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: 	kLZ4HCCompression supported: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: 	kSnappyCompression supported: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeececc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560fdde7f8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeececc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560fdde7f8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeececc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560fdde7f8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeececc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560fdde7f8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeececc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560fdde7f8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeececc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560fdde7f8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeececc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560fdde7f8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeecece0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560fdde7fa30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeecece0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560fdde7fa30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeecece0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560fdde7fa30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: bc03593d-e9d5-4c06-9aa7-16048552921e
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610055565082, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610055566810, "job": 1, "event": "recovery_finished"}
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: freelist init
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: freelist _read_cfg
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluefs umount
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedb000 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedb000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedb000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedb000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bdev(0x560fddedb000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluefs mount
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluefs mount shared_bdev_used = 27262976
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: RocksDB version: 7.9.2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Git sha 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: DB SUMMARY
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: DB Session ID:  GQTJY16QVIP0A3829K2T
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: CURRENT file:  CURRENT
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                         Options.error_if_exists: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.create_if_missing: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                                     Options.env: 0x560fdf0a0af0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                                Options.info_log: 0x560fdeecea80
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                              Options.statistics: (nil)
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                               Options.use_fsync: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                              Options.db_log_dir: 
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                                 Options.wal_dir: db.wal
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.write_buffer_manager: 0x560fded75900
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.unordered_write: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                               Options.row_cache: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                              Options.wal_filter: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.two_write_queues: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.wal_compression: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.atomic_flush: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.max_background_jobs: 4
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.max_background_compactions: -1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.max_subcompactions: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.max_open_files: -1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Compression algorithms supported:
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: 	kZSTD supported: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: 	kXpressCompression supported: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: 	kBZip2Compression supported: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: 	kLZ4Compression supported: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: 	kZlibCompression supported: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: 	kLZ4HCCompression supported: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: 	kSnappyCompression supported: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeecec20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560fdde7f8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeecec20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560fdde7f8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeecec20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560fdde7f8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeecec20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560fdde7f8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeecec20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560fdde7f8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeecec20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560fdde7f8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeecec20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560fdde7f8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeecf120)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560fdde7fa30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeecf120)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560fdde7fa30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeecf120)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560fdde7fa30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: bc03593d-e9d5-4c06-9aa7-16048552921e
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610055619423, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610055621167, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610055, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc03593d-e9d5-4c06-9aa7-16048552921e", "db_session_id": "GQTJY16QVIP0A3829K2T", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610055622242, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610055, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc03593d-e9d5-4c06-9aa7-16048552921e", "db_session_id": "GQTJY16QVIP0A3829K2T", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610055625850, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610055, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc03593d-e9d5-4c06-9aa7-16048552921e", "db_session_id": "GQTJY16QVIP0A3829K2T", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610055627460, "job": 1, "event": "recovery_finished"}
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560fdf0ea000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: DB pointer 0x560fdf08a000
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.1 total, 0.1 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 460.80 MB usag
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: _get_class not permitted to load lua
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: _get_class not permitted to load sdk
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: osd.1 0 load_pgs
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: osd.1 0 load_pgs opened 0 pgs
Dec 13 02:14:15 np0005558317 ceph-osd[86142]: osd.1 0 log_to_monitors true
Dec 13 02:14:15 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1[86138]: 2025-12-13T07:14:15.666+0000 7fe8c07658c0 -1 osd.1 0 log_to_monitors true
Dec 13 02:14:15 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Dec 13 02:14:15 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2712458861,v1:192.168.122.100:6807/2712458861]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Dec 13 02:14:15 np0005558317 podman[86677]: 2025-12-13 07:14:15.7546951 +0000 UTC m=+0.032232880 container create 790f70554662ab953951b7ce61d476c7c80960cebea3b7e06b3d495df7d764be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_kirch, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 02:14:15 np0005558317 systemd[1]: Started libpod-conmon-790f70554662ab953951b7ce61d476c7c80960cebea3b7e06b3d495df7d764be.scope.
Dec 13 02:14:15 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:15 np0005558317 podman[86677]: 2025-12-13 07:14:15.809606627 +0000 UTC m=+0.087144407 container init 790f70554662ab953951b7ce61d476c7c80960cebea3b7e06b3d495df7d764be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_kirch, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:14:15 np0005558317 podman[86677]: 2025-12-13 07:14:15.814720913 +0000 UTC m=+0.092258693 container start 790f70554662ab953951b7ce61d476c7c80960cebea3b7e06b3d495df7d764be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:14:15 np0005558317 podman[86677]: 2025-12-13 07:14:15.817084216 +0000 UTC m=+0.094621996 container attach 790f70554662ab953951b7ce61d476c7c80960cebea3b7e06b3d495df7d764be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:15 np0005558317 wizardly_kirch[86690]: 167 167
Dec 13 02:14:15 np0005558317 systemd[1]: libpod-790f70554662ab953951b7ce61d476c7c80960cebea3b7e06b3d495df7d764be.scope: Deactivated successfully.
Dec 13 02:14:15 np0005558317 podman[86677]: 2025-12-13 07:14:15.81825665 +0000 UTC m=+0.095794430 container died 790f70554662ab953951b7ce61d476c7c80960cebea3b7e06b3d495df7d764be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_kirch, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 02:14:15 np0005558317 systemd[1]: var-lib-containers-storage-overlay-04d2417f7b8ef51f3bcf5bfc7290bfa50f7d2bfec174801dcef185e108939331-merged.mount: Deactivated successfully.
Dec 13 02:14:15 np0005558317 podman[86677]: 2025-12-13 07:14:15.741335433 +0000 UTC m=+0.018873234 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:15 np0005558317 podman[86677]: 2025-12-13 07:14:15.849494009 +0000 UTC m=+0.127031789 container remove 790f70554662ab953951b7ce61d476c7c80960cebea3b7e06b3d495df7d764be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_kirch, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:14:15 np0005558317 systemd[1]: libpod-conmon-790f70554662ab953951b7ce61d476c7c80960cebea3b7e06b3d495df7d764be.scope: Deactivated successfully.
Dec 13 02:14:15 np0005558317 ceph-osd[85140]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 83.597 iops: 21400.840 elapsed_sec: 0.140
Dec 13 02:14:15 np0005558317 ceph-osd[85140]: log_channel(cluster) log [WRN] : OSD bench result of 21400.840372 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 13 02:14:15 np0005558317 ceph-osd[85140]: osd.0 0 waiting for initial osdmap
Dec 13 02:14:15 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0[85136]: 2025-12-13T07:14:15.864+0000 7f1ef605b640 -1 osd.0 0 waiting for initial osdmap
Dec 13 02:14:15 np0005558317 ceph-osd[85140]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec 13 02:14:15 np0005558317 ceph-osd[85140]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec 13 02:14:15 np0005558317 ceph-osd[85140]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec 13 02:14:15 np0005558317 ceph-osd[85140]: osd.0 8 check_osdmap_features require_osd_release unknown -> tentacle
Dec 13 02:14:15 np0005558317 ceph-osd[85140]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 13 02:14:15 np0005558317 ceph-osd[85140]: osd.0 8 set_numa_affinity not setting numa affinity
Dec 13 02:14:15 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0[85136]: 2025-12-13T07:14:15.880+0000 7f1ef0e60640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 13 02:14:15 np0005558317 ceph-osd[85140]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Dec 13 02:14:16 np0005558317 podman[86717]: 2025-12-13 07:14:16.021357413 +0000 UTC m=+0.027836576 container create 9dd977d021968012ed2f01d49383e21218642515724f0dd9c7d07cd716a29eb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate-test, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:16 np0005558317 systemd[1]: Started libpod-conmon-9dd977d021968012ed2f01d49383e21218642515724f0dd9c7d07cd716a29eb7.scope.
Dec 13 02:14:16 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:16 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ac7d6d9d6e66d71397d5d817b98aa2554c009181d79754d5602a994873367a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:16 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ac7d6d9d6e66d71397d5d817b98aa2554c009181d79754d5602a994873367a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:16 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ac7d6d9d6e66d71397d5d817b98aa2554c009181d79754d5602a994873367a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:16 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ac7d6d9d6e66d71397d5d817b98aa2554c009181d79754d5602a994873367a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:16 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ac7d6d9d6e66d71397d5d817b98aa2554c009181d79754d5602a994873367a/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:16 np0005558317 podman[86717]: 2025-12-13 07:14:16.075364099 +0000 UTC m=+0.081843263 container init 9dd977d021968012ed2f01d49383e21218642515724f0dd9c7d07cd716a29eb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 02:14:16 np0005558317 podman[86717]: 2025-12-13 07:14:16.080100744 +0000 UTC m=+0.086579907 container start 9dd977d021968012ed2f01d49383e21218642515724f0dd9c7d07cd716a29eb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 02:14:16 np0005558317 podman[86717]: 2025-12-13 07:14:16.081473305 +0000 UTC m=+0.087952468 container attach 9dd977d021968012ed2f01d49383e21218642515724f0dd9c7d07cd716a29eb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate-test, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 02:14:16 np0005558317 podman[86717]: 2025-12-13 07:14:16.010554573 +0000 UTC m=+0.017033736 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:16 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:14:16 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate-test[86730]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 13 02:14:16 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate-test[86730]:                            [--no-systemd] [--no-tmpfs]
Dec 13 02:14:16 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate-test[86730]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 13 02:14:16 np0005558317 systemd[1]: libpod-9dd977d021968012ed2f01d49383e21218642515724f0dd9c7d07cd716a29eb7.scope: Deactivated successfully.
Dec 13 02:14:16 np0005558317 conmon[86730]: conmon 9dd977d021968012ed2f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9dd977d021968012ed2f01d49383e21218642515724f0dd9c7d07cd716a29eb7.scope/container/memory.events
Dec 13 02:14:16 np0005558317 podman[86717]: 2025-12-13 07:14:16.237112807 +0000 UTC m=+0.243591970 container died 9dd977d021968012ed2f01d49383e21218642515724f0dd9c7d07cd716a29eb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate-test, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:16 np0005558317 systemd[1]: var-lib-containers-storage-overlay-a8ac7d6d9d6e66d71397d5d817b98aa2554c009181d79754d5602a994873367a-merged.mount: Deactivated successfully.
Dec 13 02:14:16 np0005558317 ceph-mgr[75200]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3611488797; not ready for session (expect reconnect)
Dec 13 02:14:16 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 02:14:16 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 02:14:16 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 02:14:16 np0005558317 podman[86717]: 2025-12-13 07:14:16.259372668 +0000 UTC m=+0.265851831 container remove 9dd977d021968012ed2f01d49383e21218642515724f0dd9c7d07cd716a29eb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate-test, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:16 np0005558317 systemd[1]: libpod-conmon-9dd977d021968012ed2f01d49383e21218642515724f0dd9c7d07cd716a29eb7.scope: Deactivated successfully.
Dec 13 02:14:16 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:16 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:16 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 13 02:14:16 np0005558317 ceph-mon[74928]: Deploying daemon osd.2 on compute-0
Dec 13 02:14:16 np0005558317 ceph-mon[74928]: from='osd.1 [v2:192.168.122.100:6806/2712458861,v1:192.168.122.100:6807/2712458861]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Dec 13 02:14:16 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Dec 13 02:14:16 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 02:14:16 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2712458861,v1:192.168.122.100:6807/2712458861]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 13 02:14:16 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Dec 13 02:14:16 np0005558317 ceph-mon[74928]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3611488797,v1:192.168.122.100:6803/3611488797] boot
Dec 13 02:14:16 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Dec 13 02:14:16 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec 13 02:14:16 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2712458861,v1:192.168.122.100:6807/2712458861]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 13 02:14:16 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Dec 13 02:14:16 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 02:14:16 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 02:14:16 np0005558317 ceph-osd[85140]: osd.0 9 state: booting -> active
Dec 13 02:14:16 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 02:14:16 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 02:14:16 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:14:16 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:14:16 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 02:14:16 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:14:16 np0005558317 systemd[1]: Reloading.
Dec 13 02:14:16 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:14:16 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:14:16 np0005558317 systemd[1]: Reloading.
Dec 13 02:14:16 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 13 02:14:16 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 13 02:14:16 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:14:16 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:14:16 np0005558317 systemd[1]: Starting Ceph osd.2 for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de...
Dec 13 02:14:16 np0005558317 podman[86879]: 2025-12-13 07:14:16.988482273 +0000 UTC m=+0.024662278 container create bfa99686f93a1d5bb0e489b51cd2e34c3ce32773640f4988bd149a0bd41d4e2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:17 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:17 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cacee7e2a09edec88a34da32d488a986046fcf55bf48ce4be8ad173be13aa664/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:17 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cacee7e2a09edec88a34da32d488a986046fcf55bf48ce4be8ad173be13aa664/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:17 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cacee7e2a09edec88a34da32d488a986046fcf55bf48ce4be8ad173be13aa664/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:17 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cacee7e2a09edec88a34da32d488a986046fcf55bf48ce4be8ad173be13aa664/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:17 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cacee7e2a09edec88a34da32d488a986046fcf55bf48ce4be8ad173be13aa664/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:17 np0005558317 podman[86879]: 2025-12-13 07:14:17.042869254 +0000 UTC m=+0.079049259 container init bfa99686f93a1d5bb0e489b51cd2e34c3ce32773640f4988bd149a0bd41d4e2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:14:17 np0005558317 ceph-mgr[75200]: [devicehealth INFO root] creating mgr pool
Dec 13 02:14:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Dec 13 02:14:17 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Dec 13 02:14:17 np0005558317 podman[86879]: 2025-12-13 07:14:17.048419598 +0000 UTC m=+0.084599604 container start bfa99686f93a1d5bb0e489b51cd2e34c3ce32773640f4988bd149a0bd41d4e2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:17 np0005558317 podman[86879]: 2025-12-13 07:14:17.051031268 +0000 UTC m=+0.087211294 container attach bfa99686f93a1d5bb0e489b51cd2e34c3ce32773640f4988bd149a0bd41d4e2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:17 np0005558317 podman[86879]: 2025-12-13 07:14:16.978145779 +0000 UTC m=+0.014325794 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:17 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate[86891]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:17 np0005558317 bash[86879]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:17 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate[86891]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:17 np0005558317 bash[86879]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Dec 13 02:14:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 02:14:17 np0005558317 ceph-mon[74928]: OSD bench result of 21400.840372 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 13 02:14:17 np0005558317 ceph-mon[74928]: from='osd.1 [v2:192.168.122.100:6806/2712458861,v1:192.168.122.100:6807/2712458861]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 13 02:14:17 np0005558317 ceph-mon[74928]: osd.0 [v2:192.168.122.100:6802/3611488797,v1:192.168.122.100:6803/3611488797] boot
Dec 13 02:14:17 np0005558317 ceph-mon[74928]: from='osd.1 [v2:192.168.122.100:6806/2712458861,v1:192.168.122.100:6807/2712458861]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 13 02:14:17 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Dec 13 02:14:17 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2712458861,v1:192.168.122.100:6807/2712458861]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 13 02:14:17 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 13 02:14:17 np0005558317 ceph-osd[86142]: osd.1 0 done with init, starting boot process
Dec 13 02:14:17 np0005558317 ceph-osd[86142]: osd.1 0 start_boot
Dec 13 02:14:17 np0005558317 ceph-osd[86142]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 13 02:14:17 np0005558317 ceph-osd[86142]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 13 02:14:17 np0005558317 ceph-osd[86142]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 13 02:14:17 np0005558317 ceph-osd[86142]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 13 02:14:17 np0005558317 ceph-osd[86142]: osd.1 0  bench count 12288000 bsize 4 KiB
Dec 13 02:14:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Dec 13 02:14:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Dec 13 02:14:17 np0005558317 ceph-osd[85140]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 13 02:14:17 np0005558317 ceph-osd[85140]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec 13 02:14:17 np0005558317 ceph-osd[85140]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 13 02:14:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec 13 02:14:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec 13 02:14:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec 13 02:14:17 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Dec 13 02:14:17 np0005558317 ceph-mgr[75200]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2712458861; not ready for session (expect reconnect)
Dec 13 02:14:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 02:14:17 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 02:14:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:14:17 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:14:17 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 02:14:17 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:14:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Dec 13 02:14:17 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Dec 13 02:14:17 np0005558317 lvm[86974]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:14:17 np0005558317 lvm[86974]: VG ceph_vg0 finished
Dec 13 02:14:17 np0005558317 lvm[86977]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:14:17 np0005558317 lvm[86977]: VG ceph_vg1 finished
Dec 13 02:14:17 np0005558317 lvm[86980]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:14:17 np0005558317 lvm[86980]: VG ceph_vg2 finished
Dec 13 02:14:17 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate[86891]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 13 02:14:17 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate[86891]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:17 np0005558317 bash[86879]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 13 02:14:17 np0005558317 bash[86879]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:17 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate[86891]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:17 np0005558317 bash[86879]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:14:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:14:17 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate[86891]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 13 02:14:17 np0005558317 bash[86879]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 13 02:14:17 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate[86891]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec 13 02:14:17 np0005558317 bash[86879]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec 13 02:14:17 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate[86891]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 13 02:14:17 np0005558317 bash[86879]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 13 02:14:17 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate[86891]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec 13 02:14:17 np0005558317 bash[86879]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec 13 02:14:17 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate[86891]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 13 02:14:17 np0005558317 bash[86879]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 13 02:14:17 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate[86891]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 13 02:14:17 np0005558317 bash[86879]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 13 02:14:17 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate[86891]: --> ceph-volume lvm activate successful for osd ID: 2
Dec 13 02:14:17 np0005558317 bash[86879]: --> ceph-volume lvm activate successful for osd ID: 2
Dec 13 02:14:17 np0005558317 systemd[1]: libpod-bfa99686f93a1d5bb0e489b51cd2e34c3ce32773640f4988bd149a0bd41d4e2d.scope: Deactivated successfully.
Dec 13 02:14:17 np0005558317 podman[86879]: 2025-12-13 07:14:17.814050527 +0000 UTC m=+0.850230531 container died bfa99686f93a1d5bb0e489b51cd2e34c3ce32773640f4988bd149a0bd41d4e2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 02:14:17 np0005558317 systemd[1]: libpod-bfa99686f93a1d5bb0e489b51cd2e34c3ce32773640f4988bd149a0bd41d4e2d.scope: Consumed 1.018s CPU time.
Dec 13 02:14:17 np0005558317 systemd[1]: var-lib-containers-storage-overlay-cacee7e2a09edec88a34da32d488a986046fcf55bf48ce4be8ad173be13aa664-merged.mount: Deactivated successfully.
Dec 13 02:14:17 np0005558317 podman[86879]: 2025-12-13 07:14:17.86119458 +0000 UTC m=+0.897374586 container remove bfa99686f93a1d5bb0e489b51cd2e34c3ce32773640f4988bd149a0bd41d4e2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:18 np0005558317 podman[87139]: 2025-12-13 07:14:18.03140086 +0000 UTC m=+0.061555971 container create bb7cd2f636f6ef6017e815fa4141bf45b494cfb9652486980f5606492505725a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 02:14:18 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f187ab4b9e239a28c546dd35fd1006ef1c99f0252f30548ee49ab2fe96259030/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:18 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f187ab4b9e239a28c546dd35fd1006ef1c99f0252f30548ee49ab2fe96259030/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:18 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f187ab4b9e239a28c546dd35fd1006ef1c99f0252f30548ee49ab2fe96259030/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:18 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f187ab4b9e239a28c546dd35fd1006ef1c99f0252f30548ee49ab2fe96259030/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:18 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f187ab4b9e239a28c546dd35fd1006ef1c99f0252f30548ee49ab2fe96259030/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:18 np0005558317 podman[87139]: 2025-12-13 07:14:17.984424351 +0000 UTC m=+0.014579472 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:18 np0005558317 podman[87139]: 2025-12-13 07:14:18.137793065 +0000 UTC m=+0.167948165 container init bb7cd2f636f6ef6017e815fa4141bf45b494cfb9652486980f5606492505725a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 02:14:18 np0005558317 podman[87139]: 2025-12-13 07:14:18.142484624 +0000 UTC m=+0.172639725 container start bb7cd2f636f6ef6017e815fa4141bf45b494cfb9652486980f5606492505725a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:18 np0005558317 bash[87139]: bb7cd2f636f6ef6017e815fa4141bf45b494cfb9652486980f5606492505725a
Dec 13 02:14:18 np0005558317 systemd[1]: Started Ceph osd.2 for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de.
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: pidfile_write: ignore empty --pid-file
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 02:14:18 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:14:18 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v21: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 02:14:18 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:18 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:14:18 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7c00 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: load: jerasure load: lrc 
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 02:14:18 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Dec 13 02:14:18 np0005558317 ceph-mgr[75200]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2712458861; not ready for session (expect reconnect)
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 02:14:18 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 02:14:18 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 02:14:18 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 02:14:18 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 13 02:14:18 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Dec 13 02:14:18 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Dec 13 02:14:18 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 02:14:18 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 02:14:18 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:14:18 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:14:18 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 02:14:18 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:14:18 np0005558317 ceph-mon[74928]: from='osd.1 [v2:192.168.122.100:6806/2712458861,v1:192.168.122.100:6807/2712458861]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 13 02:14:18 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 13 02:14:18 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Dec 13 02:14:18 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:18 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8f000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8f000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8f000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8f000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluefs mount
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluefs mount shared_bdev_used = 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: RocksDB version: 7.9.2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Git sha 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: DB SUMMARY
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: DB Session ID:  DG2TTK5W7R98U96GYMKG
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: CURRENT file:  CURRENT
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                         Options.error_if_exists: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.create_if_missing: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                                     Options.env: 0x5558b3e37ea0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                                Options.info_log: 0x5558b4ec28a0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                              Options.statistics: (nil)
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                               Options.use_fsync: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                              Options.db_log_dir: 
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                                 Options.wal_dir: db.wal
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.write_buffer_manager: 0x5558b3e98b40
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.unordered_write: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                               Options.row_cache: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                              Options.wal_filter: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.two_write_queues: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.wal_compression: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.atomic_flush: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.max_background_jobs: 4
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.max_background_compactions: -1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.max_subcompactions: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.max_open_files: -1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Compression algorithms supported:
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: 	kZSTD supported: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: 	kXpressCompression supported: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: 	kBZip2Compression supported: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: 	kLZ4Compression supported: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: 	kZlibCompression supported: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: 	kLZ4HCCompression supported: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: 	kSnappyCompression supported: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5558b3e3b8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5558b3e3b8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5558b3e3b8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5558b3e3b8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5558b3e3b8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5558b3e3b8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5558b3e3b8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5558b3e3ba30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5558b3e3ba30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5558b3e3ba30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 97da3cf1-8819-480c-976a-60b9e4004bb7
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610058392205, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610058393627, "job": 1, "event": "recovery_finished"}
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: freelist init
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: freelist _read_cfg
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluefs umount
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8f000 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8f000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8f000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8f000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bdev(0x5558b3e8f000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluefs mount
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluefs mount shared_bdev_used = 27262976
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: RocksDB version: 7.9.2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Git sha 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: DB SUMMARY
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: DB Session ID:  DG2TTK5W7R98U96GYMKH
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: CURRENT file:  CURRENT
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                         Options.error_if_exists: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.create_if_missing: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                                     Options.env: 0x5558b4f627e0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                                Options.info_log: 0x5558b4ec2a40
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                              Options.statistics: (nil)
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                               Options.use_fsync: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                              Options.db_log_dir: 
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                                 Options.wal_dir: db.wal
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.write_buffer_manager: 0x5558b3e99900
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.unordered_write: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                               Options.row_cache: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                              Options.wal_filter: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.two_write_queues: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.wal_compression: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.atomic_flush: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.max_background_jobs: 4
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.max_background_compactions: -1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.max_subcompactions: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.max_open_files: -1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Compression algorithms supported:
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: 	kZSTD supported: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: 	kXpressCompression supported: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: 	kBZip2Compression supported: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: 	kLZ4Compression supported: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: 	kZlibCompression supported: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: 	kLZ4HCCompression supported: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: 	kSnappyCompression supported: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5558b3e3b8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5558b3e3b8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5558b3e3b8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5558b3e3b8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5558b3e3b8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5558b3e3b8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5558b3e3b8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec30c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5558b3e3ba30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec30c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5558b3e3ba30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec30c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5558b3e3ba30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 97da3cf1-8819-480c-976a-60b9e4004bb7
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610058472256, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610058475804, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610058, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "97da3cf1-8819-480c-976a-60b9e4004bb7", "db_session_id": "DG2TTK5W7R98U96GYMKH", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610058480216, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610058, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "97da3cf1-8819-480c-976a-60b9e4004bb7", "db_session_id": "DG2TTK5W7R98U96GYMKH", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610058482480, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610058, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "97da3cf1-8819-480c-976a-60b9e4004bb7", "db_session_id": "DG2TTK5W7R98U96GYMKH", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610058483145, "job": 1, "event": "recovery_finished"}
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5558b50a6000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: DB pointer 0x5558b507e000
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: _get_class not permitted to load lua
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 3 writes, 4 keys, 3 commit groups, 1.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 2 writes, 0 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3 writes, 4 keys, 3 commit groups, 1.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 2 writes, 0 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache 
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: _get_class not permitted to load sdk
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: osd.2 0 load_pgs
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: osd.2 0 load_pgs opened 0 pgs
Dec 13 02:14:18 np0005558317 ceph-osd[87155]: osd.2 0 log_to_monitors true
Dec 13 02:14:18 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2[87151]: 2025-12-13T07:14:18.495+0000 7f6c509578c0 -1 osd.2 0 log_to_monitors true
Dec 13 02:14:18 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Dec 13 02:14:18 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2380119328,v1:192.168.122.100:6811/2380119328]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Dec 13 02:14:18 np0005558317 podman[87668]: 2025-12-13 07:14:18.567421955 +0000 UTC m=+0.026528256 container create 6b19f4150266ed25e89d4aca3321af0504dcacf9c9abcb0add7046c6464268d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_goldwasser, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:18 np0005558317 systemd[1]: Started libpod-conmon-6b19f4150266ed25e89d4aca3321af0504dcacf9c9abcb0add7046c6464268d8.scope.
Dec 13 02:14:18 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:18 np0005558317 podman[87668]: 2025-12-13 07:14:18.613921819 +0000 UTC m=+0.073028140 container init 6b19f4150266ed25e89d4aca3321af0504dcacf9c9abcb0add7046c6464268d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_goldwasser, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:18 np0005558317 podman[87668]: 2025-12-13 07:14:18.61832175 +0000 UTC m=+0.077428052 container start 6b19f4150266ed25e89d4aca3321af0504dcacf9c9abcb0add7046c6464268d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_goldwasser, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 02:14:18 np0005558317 podman[87668]: 2025-12-13 07:14:18.619277608 +0000 UTC m=+0.078383908 container attach 6b19f4150266ed25e89d4aca3321af0504dcacf9c9abcb0add7046c6464268d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 02:14:18 np0005558317 suspicious_goldwasser[87680]: 167 167
Dec 13 02:14:18 np0005558317 systemd[1]: libpod-6b19f4150266ed25e89d4aca3321af0504dcacf9c9abcb0add7046c6464268d8.scope: Deactivated successfully.
Dec 13 02:14:18 np0005558317 podman[87668]: 2025-12-13 07:14:18.622085817 +0000 UTC m=+0.081192118 container died 6b19f4150266ed25e89d4aca3321af0504dcacf9c9abcb0add7046c6464268d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 02:14:18 np0005558317 systemd[1]: var-lib-containers-storage-overlay-674f3bc22c8b797aa085e3db9b9c775aff8f2d67d76947734cbc0b52319a4007-merged.mount: Deactivated successfully.
Dec 13 02:14:18 np0005558317 podman[87668]: 2025-12-13 07:14:18.644103082 +0000 UTC m=+0.103209382 container remove 6b19f4150266ed25e89d4aca3321af0504dcacf9c9abcb0add7046c6464268d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_goldwasser, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 02:14:18 np0005558317 podman[87668]: 2025-12-13 07:14:18.557350149 +0000 UTC m=+0.016456470 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:18 np0005558317 systemd[1]: libpod-conmon-6b19f4150266ed25e89d4aca3321af0504dcacf9c9abcb0add7046c6464268d8.scope: Deactivated successfully.
Dec 13 02:14:18 np0005558317 podman[87703]: 2025-12-13 07:14:18.759134869 +0000 UTC m=+0.028412038 container create 104e799054e77dc9fda0f6cdd84955a1898aa5d6542273869a23b5473570145c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_khorana, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:18 np0005558317 systemd[1]: Started libpod-conmon-104e799054e77dc9fda0f6cdd84955a1898aa5d6542273869a23b5473570145c.scope.
Dec 13 02:14:18 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:18 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/268cd73b7ebc14ea92a5f7cbb44b87192cc16ae983dce3338d9be445779cd73e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:18 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/268cd73b7ebc14ea92a5f7cbb44b87192cc16ae983dce3338d9be445779cd73e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:18 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/268cd73b7ebc14ea92a5f7cbb44b87192cc16ae983dce3338d9be445779cd73e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:18 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/268cd73b7ebc14ea92a5f7cbb44b87192cc16ae983dce3338d9be445779cd73e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:18 np0005558317 podman[87703]: 2025-12-13 07:14:18.812957329 +0000 UTC m=+0.082234498 container init 104e799054e77dc9fda0f6cdd84955a1898aa5d6542273869a23b5473570145c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_khorana, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:18 np0005558317 podman[87703]: 2025-12-13 07:14:18.817493266 +0000 UTC m=+0.086770435 container start 104e799054e77dc9fda0f6cdd84955a1898aa5d6542273869a23b5473570145c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_khorana, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:14:18 np0005558317 podman[87703]: 2025-12-13 07:14:18.818521499 +0000 UTC m=+0.087798669 container attach 104e799054e77dc9fda0f6cdd84955a1898aa5d6542273869a23b5473570145c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_khorana, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:18 np0005558317 podman[87703]: 2025-12-13 07:14:18.746822982 +0000 UTC m=+0.016100161 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:19 np0005558317 ceph-osd[86142]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 102.095 iops: 26136.199 elapsed_sec: 0.115
Dec 13 02:14:19 np0005558317 ceph-osd[86142]: log_channel(cluster) log [WRN] : OSD bench result of 26136.199394 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 13 02:14:19 np0005558317 ceph-osd[86142]: osd.1 0 waiting for initial osdmap
Dec 13 02:14:19 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1[86138]: 2025-12-13T07:14:19.057+0000 7fe8bc6e7640 -1 osd.1 0 waiting for initial osdmap
Dec 13 02:14:19 np0005558317 ceph-osd[86142]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 13 02:14:19 np0005558317 ceph-osd[86142]: osd.1 11 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec 13 02:14:19 np0005558317 ceph-osd[86142]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 13 02:14:19 np0005558317 ceph-osd[86142]: osd.1 11 check_osdmap_features require_osd_release unknown -> tentacle
Dec 13 02:14:19 np0005558317 ceph-osd[86142]: osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 13 02:14:19 np0005558317 ceph-osd[86142]: osd.1 11 set_numa_affinity not setting numa affinity
Dec 13 02:14:19 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1[86138]: 2025-12-13T07:14:19.068+0000 7fe8b74ec640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 13 02:14:19 np0005558317 ceph-osd[86142]: osd.1 11 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Dec 13 02:14:19 np0005558317 ceph-mgr[75200]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2712458861; not ready for session (expect reconnect)
Dec 13 02:14:19 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 02:14:19 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 02:14:19 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 02:14:19 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Dec 13 02:14:19 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 13 02:14:19 np0005558317 ceph-mon[74928]: from='osd.2 [v2:192.168.122.100:6810/2380119328,v1:192.168.122.100:6811/2380119328]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Dec 13 02:14:19 np0005558317 lvm[87791]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:14:19 np0005558317 lvm[87791]: VG ceph_vg0 finished
Dec 13 02:14:19 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2380119328,v1:192.168.122.100:6811/2380119328]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 13 02:14:19 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e12 e12: 3 total, 2 up, 3 in
Dec 13 02:14:19 np0005558317 ceph-mon[74928]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/2712458861,v1:192.168.122.100:6807/2712458861] boot
Dec 13 02:14:19 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 2 up, 3 in
Dec 13 02:14:19 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec 13 02:14:19 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2380119328,v1:192.168.122.100:6811/2380119328]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 13 02:14:19 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e12 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Dec 13 02:14:19 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 02:14:19 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 02:14:19 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:14:19 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:14:19 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:14:19 np0005558317 ceph-osd[86142]: osd.1 12 state: booting -> active
Dec 13 02:14:19 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:19 np0005558317 lvm[87794]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:14:19 np0005558317 lvm[87794]: VG ceph_vg1 finished
Dec 13 02:14:19 np0005558317 lvm[87797]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:14:19 np0005558317 lvm[87797]: VG ceph_vg2 finished
Dec 13 02:14:19 np0005558317 lvm[87798]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:14:19 np0005558317 lvm[87798]: VG ceph_vg0 finished
Dec 13 02:14:19 np0005558317 friendly_khorana[87717]: {}
Dec 13 02:14:19 np0005558317 lvm[87801]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:14:19 np0005558317 lvm[87801]: VG ceph_vg2 finished
Dec 13 02:14:19 np0005558317 systemd[1]: libpod-104e799054e77dc9fda0f6cdd84955a1898aa5d6542273869a23b5473570145c.scope: Deactivated successfully.
Dec 13 02:14:19 np0005558317 podman[87703]: 2025-12-13 07:14:19.421842934 +0000 UTC m=+0.691120103 container died 104e799054e77dc9fda0f6cdd84955a1898aa5d6542273869a23b5473570145c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 02:14:19 np0005558317 systemd[1]: var-lib-containers-storage-overlay-268cd73b7ebc14ea92a5f7cbb44b87192cc16ae983dce3338d9be445779cd73e-merged.mount: Deactivated successfully.
Dec 13 02:14:19 np0005558317 podman[87703]: 2025-12-13 07:14:19.44739825 +0000 UTC m=+0.716675419 container remove 104e799054e77dc9fda0f6cdd84955a1898aa5d6542273869a23b5473570145c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:19 np0005558317 lvm[87809]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:14:19 np0005558317 lvm[87809]: VG ceph_vg2 finished
Dec 13 02:14:19 np0005558317 systemd[1]: libpod-conmon-104e799054e77dc9fda0f6cdd84955a1898aa5d6542273869a23b5473570145c.scope: Deactivated successfully.
Dec 13 02:14:19 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:14:19 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:19 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:14:19 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:19 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 13 02:14:19 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 13 02:14:19 np0005558317 podman[87924]: 2025-12-13 07:14:19.937579734 +0000 UTC m=+0.040421156 container exec 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:20 np0005558317 podman[87924]: 2025-12-13 07:14:20.015120706 +0000 UTC m=+0.117962138 container exec_died 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 02:14:20 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v24: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 13 02:14:20 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Dec 13 02:14:20 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2380119328,v1:192.168.122.100:6811/2380119328]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 13 02:14:20 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Dec 13 02:14:20 np0005558317 ceph-osd[87155]: osd.2 0 done with init, starting boot process
Dec 13 02:14:20 np0005558317 ceph-osd[87155]: osd.2 0 start_boot
Dec 13 02:14:20 np0005558317 ceph-osd[87155]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 13 02:14:20 np0005558317 ceph-osd[87155]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 13 02:14:20 np0005558317 ceph-osd[87155]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 13 02:14:20 np0005558317 ceph-osd[87155]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 13 02:14:20 np0005558317 ceph-osd[87155]: osd.2 0  bench count 12288000 bsize 4 KiB
Dec 13 02:14:20 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Dec 13 02:14:20 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:14:20 np0005558317 ceph-mon[74928]: OSD bench result of 26136.199394 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 13 02:14:20 np0005558317 ceph-mon[74928]: from='osd.2 [v2:192.168.122.100:6810/2380119328,v1:192.168.122.100:6811/2380119328]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 13 02:14:20 np0005558317 ceph-mon[74928]: osd.1 [v2:192.168.122.100:6806/2712458861,v1:192.168.122.100:6807/2712458861] boot
Dec 13 02:14:20 np0005558317 ceph-mon[74928]: from='osd.2 [v2:192.168.122.100:6810/2380119328,v1:192.168.122.100:6811/2380119328]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 13 02:14:20 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:20 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:20 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:14:20 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:14:20 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=12/13 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:20 np0005558317 ceph-mgr[75200]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2380119328; not ready for session (expect reconnect)
Dec 13 02:14:20 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:14:20 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:14:20 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:14:20 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:14:20 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:20 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:14:20 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:20 np0005558317 podman[88107]: 2025-12-13 07:14:20.78627064 +0000 UTC m=+0.032005463 container create a6848fc4b6387853596bddbb04a0e395a3e3330d54fe61cd273ecfb7718aceae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 02:14:20 np0005558317 systemd[1]: Started libpod-conmon-a6848fc4b6387853596bddbb04a0e395a3e3330d54fe61cd273ecfb7718aceae.scope.
Dec 13 02:14:20 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:20 np0005558317 podman[88107]: 2025-12-13 07:14:20.845761236 +0000 UTC m=+0.091496059 container init a6848fc4b6387853596bddbb04a0e395a3e3330d54fe61cd273ecfb7718aceae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_mclean, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:14:20 np0005558317 podman[88107]: 2025-12-13 07:14:20.850642284 +0000 UTC m=+0.096377106 container start a6848fc4b6387853596bddbb04a0e395a3e3330d54fe61cd273ecfb7718aceae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:20 np0005558317 podman[88107]: 2025-12-13 07:14:20.851894346 +0000 UTC m=+0.097629169 container attach a6848fc4b6387853596bddbb04a0e395a3e3330d54fe61cd273ecfb7718aceae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:20 np0005558317 clever_mclean[88120]: 167 167
Dec 13 02:14:20 np0005558317 systemd[1]: libpod-a6848fc4b6387853596bddbb04a0e395a3e3330d54fe61cd273ecfb7718aceae.scope: Deactivated successfully.
Dec 13 02:14:20 np0005558317 podman[88107]: 2025-12-13 07:14:20.854178401 +0000 UTC m=+0.099913224 container died a6848fc4b6387853596bddbb04a0e395a3e3330d54fe61cd273ecfb7718aceae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_mclean, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 02:14:20 np0005558317 systemd[1]: var-lib-containers-storage-overlay-115ad9d993227b1bdcfb6134d52231060e8a16bb6bebebdd0ef03e3f6f6b1b87-merged.mount: Deactivated successfully.
Dec 13 02:14:20 np0005558317 podman[88107]: 2025-12-13 07:14:20.771869735 +0000 UTC m=+0.017604579 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:20 np0005558317 podman[88107]: 2025-12-13 07:14:20.87260803 +0000 UTC m=+0.118342853 container remove a6848fc4b6387853596bddbb04a0e395a3e3330d54fe61cd273ecfb7718aceae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:14:20 np0005558317 systemd[1]: libpod-conmon-a6848fc4b6387853596bddbb04a0e395a3e3330d54fe61cd273ecfb7718aceae.scope: Deactivated successfully.
Dec 13 02:14:20 np0005558317 podman[88142]: 2025-12-13 07:14:20.987087059 +0000 UTC m=+0.030340243 container create 4e6f40cc9afcebe8c713055ec15dc02d1be8339b30d3acadfcd040a4f86b5fe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 02:14:21 np0005558317 systemd[1]: Started libpod-conmon-4e6f40cc9afcebe8c713055ec15dc02d1be8339b30d3acadfcd040a4f86b5fe0.scope.
Dec 13 02:14:21 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:21 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a2331dbecacd27d98ec61310c1b6bd002afd858b5010e7fd1335031bae80cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:21 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a2331dbecacd27d98ec61310c1b6bd002afd858b5010e7fd1335031bae80cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:21 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a2331dbecacd27d98ec61310c1b6bd002afd858b5010e7fd1335031bae80cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:21 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a2331dbecacd27d98ec61310c1b6bd002afd858b5010e7fd1335031bae80cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:21 np0005558317 podman[88142]: 2025-12-13 07:14:21.037483518 +0000 UTC m=+0.080736712 container init 4e6f40cc9afcebe8c713055ec15dc02d1be8339b30d3acadfcd040a4f86b5fe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 02:14:21 np0005558317 podman[88142]: 2025-12-13 07:14:21.042832654 +0000 UTC m=+0.086085837 container start 4e6f40cc9afcebe8c713055ec15dc02d1be8339b30d3acadfcd040a4f86b5fe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_diffie, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:21 np0005558317 podman[88142]: 2025-12-13 07:14:21.044090058 +0000 UTC m=+0.087343262 container attach 4e6f40cc9afcebe8c713055ec15dc02d1be8339b30d3acadfcd040a4f86b5fe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 02:14:21 np0005558317 podman[88142]: 2025-12-13 07:14:20.973174872 +0000 UTC m=+0.016428076 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Dec 13 02:14:21 np0005558317 ceph-mgr[75200]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2380119328; not ready for session (expect reconnect)
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:14:21 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: from='osd.2 [v2:192.168.122.100:6810/2380119328,v1:192.168.122.100:6811/2380119328]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:21 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]: [
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:    {
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:        "available": false,
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:        "being_replaced": false,
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:        "ceph_device_lvm": false,
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:        "lsm_data": {},
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:        "lvs": [],
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:        "path": "/dev/sr0",
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:        "rejected_reasons": [
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:            "Insufficient space (<5GB)",
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:            "Has a FileSystem"
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:        ],
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:        "sys_api": {
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:            "actuators": null,
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:            "device_nodes": [
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:                "sr0"
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:            ],
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:            "devname": "sr0",
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:            "human_readable_size": "474.00 KB",
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:            "id_bus": "ata",
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:            "model": "QEMU DVD-ROM",
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:            "nr_requests": "64",
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:            "parent": "/dev/sr0",
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:            "partitions": {},
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:            "path": "/dev/sr0",
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:            "removable": "1",
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:            "rev": "2.5+",
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:            "ro": "0",
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:            "rotational": "1",
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:            "sas_address": "",
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:            "sas_device_handle": "",
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:            "scheduler_mode": "mq-deadline",
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:            "sectors": 0,
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:            "sectorsize": "2048",
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:            "size": 485376.0,
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:            "support_discard": "2048",
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:            "type": "disk",
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:            "vendor": "QEMU"
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:        }
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]:    }
Dec 13 02:14:21 np0005558317 trusting_diffie[88155]: ]
Dec 13 02:14:21 np0005558317 systemd[1]: libpod-4e6f40cc9afcebe8c713055ec15dc02d1be8339b30d3acadfcd040a4f86b5fe0.scope: Deactivated successfully.
Dec 13 02:14:21 np0005558317 podman[88142]: 2025-12-13 07:14:21.439200607 +0000 UTC m=+0.482453801 container died 4e6f40cc9afcebe8c713055ec15dc02d1be8339b30d3acadfcd040a4f86b5fe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_diffie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:21 np0005558317 systemd[1]: var-lib-containers-storage-overlay-23a2331dbecacd27d98ec61310c1b6bd002afd858b5010e7fd1335031bae80cc-merged.mount: Deactivated successfully.
Dec 13 02:14:21 np0005558317 podman[88142]: 2025-12-13 07:14:21.460364545 +0000 UTC m=+0.503617729 container remove 4e6f40cc9afcebe8c713055ec15dc02d1be8339b30d3acadfcd040a4f86b5fe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Dec 13 02:14:21 np0005558317 systemd[1]: libpod-conmon-4e6f40cc9afcebe8c713055ec15dc02d1be8339b30d3acadfcd040a4f86b5fe0.scope: Deactivated successfully.
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 13 02:14:21 np0005558317 ceph-mgr[75200]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43932k
Dec 13 02:14:21 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43932k
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 13 02:14:21 np0005558317 ceph-mgr[75200]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44986777: error parsing value: Value '44986777' is below minimum 939524096
Dec 13 02:14:21 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44986777: error parsing value: Value '44986777' is below minimum 939524096
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:14:21 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:14:21 np0005558317 podman[88876]: 2025-12-13 07:14:21.896728407 +0000 UTC m=+0.031330757 container create 3448dae9ceadd1e3bf75f02ce3f97ffb39c4f5b0e234e43c9a78a41e447ff426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:21 np0005558317 systemd[1]: Started libpod-conmon-3448dae9ceadd1e3bf75f02ce3f97ffb39c4f5b0e234e43c9a78a41e447ff426.scope.
Dec 13 02:14:21 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:21 np0005558317 podman[88876]: 2025-12-13 07:14:21.953396336 +0000 UTC m=+0.087998686 container init 3448dae9ceadd1e3bf75f02ce3f97ffb39c4f5b0e234e43c9a78a41e447ff426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_turing, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:21 np0005558317 podman[88876]: 2025-12-13 07:14:21.958133148 +0000 UTC m=+0.092735498 container start 3448dae9ceadd1e3bf75f02ce3f97ffb39c4f5b0e234e43c9a78a41e447ff426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:14:21 np0005558317 podman[88876]: 2025-12-13 07:14:21.959170939 +0000 UTC m=+0.093773289 container attach 3448dae9ceadd1e3bf75f02ce3f97ffb39c4f5b0e234e43c9a78a41e447ff426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_turing, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:21 np0005558317 pedantic_turing[88889]: 167 167
Dec 13 02:14:21 np0005558317 systemd[1]: libpod-3448dae9ceadd1e3bf75f02ce3f97ffb39c4f5b0e234e43c9a78a41e447ff426.scope: Deactivated successfully.
Dec 13 02:14:21 np0005558317 podman[88876]: 2025-12-13 07:14:21.961769302 +0000 UTC m=+0.096371652 container died 3448dae9ceadd1e3bf75f02ce3f97ffb39c4f5b0e234e43c9a78a41e447ff426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_turing, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:21 np0005558317 systemd[1]: var-lib-containers-storage-overlay-9159bef3878e74b6dc18d55add062e229faa0b69c9d9d0525d020f868f4223f1-merged.mount: Deactivated successfully.
Dec 13 02:14:21 np0005558317 podman[88876]: 2025-12-13 07:14:21.88365166 +0000 UTC m=+0.018254030 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:21 np0005558317 podman[88876]: 2025-12-13 07:14:21.985741178 +0000 UTC m=+0.120343528 container remove 3448dae9ceadd1e3bf75f02ce3f97ffb39c4f5b0e234e43c9a78a41e447ff426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:22 np0005558317 systemd[1]: libpod-conmon-3448dae9ceadd1e3bf75f02ce3f97ffb39c4f5b0e234e43c9a78a41e447ff426.scope: Deactivated successfully.
Dec 13 02:14:22 np0005558317 ceph-osd[87155]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 103.165 iops: 26410.155 elapsed_sec: 0.114
Dec 13 02:14:22 np0005558317 ceph-osd[87155]: log_channel(cluster) log [WRN] : OSD bench result of 26410.155043 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 13 02:14:22 np0005558317 ceph-osd[87155]: osd.2 0 waiting for initial osdmap
Dec 13 02:14:22 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2[87151]: 2025-12-13T07:14:22.092+0000 7f6c4c8d9640 -1 osd.2 0 waiting for initial osdmap
Dec 13 02:14:22 np0005558317 ceph-osd[87155]: osd.2 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 13 02:14:22 np0005558317 ceph-osd[87155]: osd.2 14 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec 13 02:14:22 np0005558317 ceph-osd[87155]: osd.2 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 13 02:14:22 np0005558317 ceph-osd[87155]: osd.2 14 check_osdmap_features require_osd_release unknown -> tentacle
Dec 13 02:14:22 np0005558317 ceph-osd[87155]: osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 13 02:14:22 np0005558317 ceph-osd[87155]: osd.2 14 set_numa_affinity not setting numa affinity
Dec 13 02:14:22 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2[87151]: 2025-12-13T07:14:22.107+0000 7f6c476de640 -1 osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 13 02:14:22 np0005558317 ceph-osd[87155]: osd.2 14 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Dec 13 02:14:22 np0005558317 podman[88911]: 2025-12-13 07:14:22.121050534 +0000 UTC m=+0.033864060 container create 8cdad2bf0d0133433f1494bb20e6160dfcfb2ff8369a01c00998926ffb8aa3d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 02:14:22 np0005558317 systemd[1]: Started libpod-conmon-8cdad2bf0d0133433f1494bb20e6160dfcfb2ff8369a01c00998926ffb8aa3d8.scope.
Dec 13 02:14:22 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:22 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c33e87a03dfce41c163476a17775478e3803d0f65a764c9dbd33138215c4cc36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:22 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c33e87a03dfce41c163476a17775478e3803d0f65a764c9dbd33138215c4cc36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:22 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c33e87a03dfce41c163476a17775478e3803d0f65a764c9dbd33138215c4cc36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:22 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c33e87a03dfce41c163476a17775478e3803d0f65a764c9dbd33138215c4cc36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:22 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c33e87a03dfce41c163476a17775478e3803d0f65a764c9dbd33138215c4cc36/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:22 np0005558317 podman[88911]: 2025-12-13 07:14:22.181281278 +0000 UTC m=+0.094094815 container init 8cdad2bf0d0133433f1494bb20e6160dfcfb2ff8369a01c00998926ffb8aa3d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chandrasekhar, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:22 np0005558317 podman[88911]: 2025-12-13 07:14:22.187067774 +0000 UTC m=+0.099881301 container start 8cdad2bf0d0133433f1494bb20e6160dfcfb2ff8369a01c00998926ffb8aa3d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:22 np0005558317 podman[88911]: 2025-12-13 07:14:22.188072212 +0000 UTC m=+0.100885738 container attach 8cdad2bf0d0133433f1494bb20e6160dfcfb2ff8369a01c00998926ffb8aa3d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:22 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v27: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 13 02:14:22 np0005558317 podman[88911]: 2025-12-13 07:14:22.108873266 +0000 UTC m=+0.021686792 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:22 np0005558317 ceph-mgr[75200]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2380119328; not ready for session (expect reconnect)
Dec 13 02:14:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:14:22 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:14:22 np0005558317 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:14:22 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:22 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:22 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 13 02:14:22 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 13 02:14:22 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 13 02:14:22 np0005558317 ceph-mon[74928]: Adjusting osd_memory_target on compute-0 to 43932k
Dec 13 02:14:22 np0005558317 ceph-mon[74928]: Unable to set osd_memory_target on compute-0 to 44986777: error parsing value: Value '44986777' is below minimum 939524096
Dec 13 02:14:22 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:14:22 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:22 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:14:22 np0005558317 quirky_chandrasekhar[88925]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:14:22 np0005558317 quirky_chandrasekhar[88925]: --> All data devices are unavailable
Dec 13 02:14:22 np0005558317 systemd[1]: libpod-8cdad2bf0d0133433f1494bb20e6160dfcfb2ff8369a01c00998926ffb8aa3d8.scope: Deactivated successfully.
Dec 13 02:14:22 np0005558317 podman[88911]: 2025-12-13 07:14:22.558572211 +0000 UTC m=+0.471385737 container died 8cdad2bf0d0133433f1494bb20e6160dfcfb2ff8369a01c00998926ffb8aa3d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Dec 13 02:14:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e15 e15: 3 total, 3 up, 3 in
Dec 13 02:14:22 np0005558317 systemd[1]: var-lib-containers-storage-overlay-c33e87a03dfce41c163476a17775478e3803d0f65a764c9dbd33138215c4cc36-merged.mount: Deactivated successfully.
Dec 13 02:14:22 np0005558317 ceph-mon[74928]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/2380119328,v1:192.168.122.100:6811/2380119328] boot
Dec 13 02:14:22 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 3 up, 3 in
Dec 13 02:14:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:14:22 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:14:22 np0005558317 ceph-osd[87155]: osd.2 15 state: booting -> active
Dec 13 02:14:22 np0005558317 podman[88911]: 2025-12-13 07:14:22.582637763 +0000 UTC m=+0.495451289 container remove 8cdad2bf0d0133433f1494bb20e6160dfcfb2ff8369a01c00998926ffb8aa3d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chandrasekhar, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:22 np0005558317 systemd[1]: libpod-conmon-8cdad2bf0d0133433f1494bb20e6160dfcfb2ff8369a01c00998926ffb8aa3d8.scope: Deactivated successfully.
Dec 13 02:14:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:14:22 np0005558317 podman[89016]: 2025-12-13 07:14:22.925591659 +0000 UTC m=+0.028777278 container create 2289be039e66d01ad22d93a9946ac1298f4cfec2d97b51910274d67b22398b41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_banach, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 02:14:22 np0005558317 systemd[1]: Started libpod-conmon-2289be039e66d01ad22d93a9946ac1298f4cfec2d97b51910274d67b22398b41.scope.
Dec 13 02:14:22 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:22 np0005558317 podman[89016]: 2025-12-13 07:14:22.97408905 +0000 UTC m=+0.077274680 container init 2289be039e66d01ad22d93a9946ac1298f4cfec2d97b51910274d67b22398b41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_banach, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 02:14:22 np0005558317 podman[89016]: 2025-12-13 07:14:22.978958693 +0000 UTC m=+0.082144302 container start 2289be039e66d01ad22d93a9946ac1298f4cfec2d97b51910274d67b22398b41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_banach, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:22 np0005558317 podman[89016]: 2025-12-13 07:14:22.979945157 +0000 UTC m=+0.083130766 container attach 2289be039e66d01ad22d93a9946ac1298f4cfec2d97b51910274d67b22398b41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_banach, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:22 np0005558317 elastic_banach[89029]: 167 167
Dec 13 02:14:22 np0005558317 systemd[1]: libpod-2289be039e66d01ad22d93a9946ac1298f4cfec2d97b51910274d67b22398b41.scope: Deactivated successfully.
Dec 13 02:14:22 np0005558317 conmon[89029]: conmon 2289be039e66d01ad22d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2289be039e66d01ad22d93a9946ac1298f4cfec2d97b51910274d67b22398b41.scope/container/memory.events
Dec 13 02:14:22 np0005558317 podman[89016]: 2025-12-13 07:14:22.983046196 +0000 UTC m=+0.086231805 container died 2289be039e66d01ad22d93a9946ac1298f4cfec2d97b51910274d67b22398b41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 02:14:22 np0005558317 systemd[1]: var-lib-containers-storage-overlay-1e05c93f0dd4e8c97f4f862216e32f95434ebb8ab9c1073283838b4de6b6d320-merged.mount: Deactivated successfully.
Dec 13 02:14:23 np0005558317 podman[89016]: 2025-12-13 07:14:23.010429433 +0000 UTC m=+0.113615042 container remove 2289be039e66d01ad22d93a9946ac1298f4cfec2d97b51910274d67b22398b41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:23 np0005558317 podman[89016]: 2025-12-13 07:14:22.913992759 +0000 UTC m=+0.017178388 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:23 np0005558317 systemd[1]: libpod-conmon-2289be039e66d01ad22d93a9946ac1298f4cfec2d97b51910274d67b22398b41.scope: Deactivated successfully.
Dec 13 02:14:23 np0005558317 podman[89050]: 2025-12-13 07:14:23.125321485 +0000 UTC m=+0.029495248 container create a656fe7e43aa080dc5035a558c4a4f491c4049a9d2b38fd7786d0fd16a3a4979 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_black, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 02:14:23 np0005558317 systemd[1]: Started libpod-conmon-a656fe7e43aa080dc5035a558c4a4f491c4049a9d2b38fd7786d0fd16a3a4979.scope.
Dec 13 02:14:23 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:23 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1850988413cdb4e1dca8088a95e26d961e7d737719ec20915d39c82c2260b099/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:23 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1850988413cdb4e1dca8088a95e26d961e7d737719ec20915d39c82c2260b099/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:23 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1850988413cdb4e1dca8088a95e26d961e7d737719ec20915d39c82c2260b099/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:23 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1850988413cdb4e1dca8088a95e26d961e7d737719ec20915d39c82c2260b099/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:23 np0005558317 podman[89050]: 2025-12-13 07:14:23.186892277 +0000 UTC m=+0.091066040 container init a656fe7e43aa080dc5035a558c4a4f491c4049a9d2b38fd7786d0fd16a3a4979 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 02:14:23 np0005558317 podman[89050]: 2025-12-13 07:14:23.192285794 +0000 UTC m=+0.096459557 container start a656fe7e43aa080dc5035a558c4a4f491c4049a9d2b38fd7786d0fd16a3a4979 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 02:14:23 np0005558317 podman[89050]: 2025-12-13 07:14:23.193322012 +0000 UTC m=+0.097495775 container attach a656fe7e43aa080dc5035a558c4a4f491c4049a9d2b38fd7786d0fd16a3a4979 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 02:14:23 np0005558317 ceph-mgr[75200]: [devicehealth INFO root] creating main.db for devicehealth
Dec 13 02:14:23 np0005558317 podman[89050]: 2025-12-13 07:14:23.112658975 +0000 UTC m=+0.016832758 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:23 np0005558317 ceph-mgr[75200]: [devicehealth INFO root] Check health
Dec 13 02:14:23 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 13 02:14:23 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 13 02:14:23 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 13 02:14:23 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 13 02:14:23 np0005558317 ceph-mon[74928]: OSD bench result of 26410.155043 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 13 02:14:23 np0005558317 ceph-mon[74928]: osd.2 [v2:192.168.122.100:6810/2380119328,v1:192.168.122.100:6811/2380119328] boot
Dec 13 02:14:23 np0005558317 ceph-mon[74928]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 13 02:14:23 np0005558317 ceph-mon[74928]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 13 02:14:23 np0005558317 nervous_black[89063]: {
Dec 13 02:14:23 np0005558317 nervous_black[89063]:    "0": [
Dec 13 02:14:23 np0005558317 nervous_black[89063]:        {
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "devices": [
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "/dev/loop3"
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            ],
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "lv_name": "ceph_lv0",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "lv_size": "21470642176",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "name": "ceph_lv0",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "tags": {
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.cluster_name": "ceph",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.crush_device_class": "",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.encrypted": "0",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.objectstore": "bluestore",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.osd_id": "0",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.type": "block",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.vdo": "0",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.with_tpm": "0"
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            },
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "type": "block",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "vg_name": "ceph_vg0"
Dec 13 02:14:23 np0005558317 nervous_black[89063]:        }
Dec 13 02:14:23 np0005558317 nervous_black[89063]:    ],
Dec 13 02:14:23 np0005558317 nervous_black[89063]:    "1": [
Dec 13 02:14:23 np0005558317 nervous_black[89063]:        {
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "devices": [
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "/dev/loop4"
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            ],
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "lv_name": "ceph_lv1",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "lv_size": "21470642176",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "name": "ceph_lv1",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "tags": {
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.cluster_name": "ceph",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.crush_device_class": "",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.encrypted": "0",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.objectstore": "bluestore",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.osd_id": "1",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.type": "block",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.vdo": "0",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.with_tpm": "0"
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            },
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "type": "block",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "vg_name": "ceph_vg1"
Dec 13 02:14:23 np0005558317 nervous_black[89063]:        }
Dec 13 02:14:23 np0005558317 nervous_black[89063]:    ],
Dec 13 02:14:23 np0005558317 nervous_black[89063]:    "2": [
Dec 13 02:14:23 np0005558317 nervous_black[89063]:        {
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "devices": [
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "/dev/loop5"
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            ],
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "lv_name": "ceph_lv2",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "lv_size": "21470642176",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "name": "ceph_lv2",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "tags": {
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.cluster_name": "ceph",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.crush_device_class": "",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.encrypted": "0",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.objectstore": "bluestore",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.osd_id": "2",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.type": "block",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.vdo": "0",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:                "ceph.with_tpm": "0"
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            },
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "type": "block",
Dec 13 02:14:23 np0005558317 nervous_black[89063]:            "vg_name": "ceph_vg2"
Dec 13 02:14:23 np0005558317 nervous_black[89063]:        }
Dec 13 02:14:23 np0005558317 nervous_black[89063]:    ]
Dec 13 02:14:23 np0005558317 nervous_black[89063]: }
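The JSON dump above (emitted by the `nervous_black` container) is `ceph-volume lvm list --format json` output: a map of OSD id to its logical volumes, with the `ceph.*` LV tags describing each OSD. As a sketch of how such output can be consumed, the snippet below parses a minimal, hypothetical excerpt of that structure (only OSD 2, with just the fields used) and maps each OSD id to its block LV and backing devices; the helper name `osd_block_devices` is illustrative, not part of any Ceph tooling.

```python
import json

# Hypothetical excerpt of the `ceph-volume lvm list --format json` output
# logged above: OSD id -> list of LVs, each tagged with ceph.* metadata.
lvm_list = json.loads("""
{
  "2": [
    {
      "devices": ["/dev/loop5"],
      "lv_path": "/dev/ceph_vg2/ceph_lv2",
      "tags": {
        "ceph.osd_id": "2",
        "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
        "ceph.type": "block"
      }
    }
  ]
}
""")

def osd_block_devices(listing):
    """Map OSD id -> (lv_path, backing devices) for block-type LVs."""
    out = {}
    for osd_id, lvs in listing.items():
        for lv in lvs:
            # BlueStore OSDs may carry separate db/wal LVs; keep only "block".
            if lv["tags"].get("ceph.type") == "block":
                out[int(osd_id)] = (lv["lv_path"], lv["devices"])
    return out

print(osd_block_devices(lvm_list))
# → {2: ('/dev/ceph_vg2/ceph_lv2', ['/dev/loop5'])}
```

This mirrors the data visible in the log: OSD 2's block device is `/dev/ceph_vg2/ceph_lv2`, backed by `/dev/loop5`.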
Dec 13 02:14:23 np0005558317 systemd[1]: libpod-a656fe7e43aa080dc5035a558c4a4f491c4049a9d2b38fd7786d0fd16a3a4979.scope: Deactivated successfully.
Dec 13 02:14:23 np0005558317 podman[89087]: 2025-12-13 07:14:23.467776433 +0000 UTC m=+0.017718704 container died a656fe7e43aa080dc5035a558c4a4f491c4049a9d2b38fd7786d0fd16a3a4979 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_black, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 02:14:23 np0005558317 systemd[1]: var-lib-containers-storage-overlay-1850988413cdb4e1dca8088a95e26d961e7d737719ec20915d39c82c2260b099-merged.mount: Deactivated successfully.
Dec 13 02:14:23 np0005558317 podman[89087]: 2025-12-13 07:14:23.488046881 +0000 UTC m=+0.037989132 container remove a656fe7e43aa080dc5035a558c4a4f491c4049a9d2b38fd7786d0fd16a3a4979 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_black, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:23 np0005558317 systemd[1]: libpod-conmon-a656fe7e43aa080dc5035a558c4a4f491c4049a9d2b38fd7786d0fd16a3a4979.scope: Deactivated successfully.
Dec 13 02:14:23 np0005558317 podman[89159]: 2025-12-13 07:14:23.831605655 +0000 UTC m=+0.028791975 container create e3d7a41e2b73d8e771775f331a60c383bce470e43881da06e942f13d7a5d9f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_montalcini, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 02:14:23 np0005558317 systemd[1]: Started libpod-conmon-e3d7a41e2b73d8e771775f331a60c383bce470e43881da06e942f13d7a5d9f57.scope.
Dec 13 02:14:23 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:23 np0005558317 podman[89159]: 2025-12-13 07:14:23.89052272 +0000 UTC m=+0.087709061 container init e3d7a41e2b73d8e771775f331a60c383bce470e43881da06e942f13d7a5d9f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_montalcini, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 02:14:23 np0005558317 podman[89159]: 2025-12-13 07:14:23.896012838 +0000 UTC m=+0.093199160 container start e3d7a41e2b73d8e771775f331a60c383bce470e43881da06e942f13d7a5d9f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_montalcini, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:23 np0005558317 podman[89159]: 2025-12-13 07:14:23.897378215 +0000 UTC m=+0.094564556 container attach e3d7a41e2b73d8e771775f331a60c383bce470e43881da06e942f13d7a5d9f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_montalcini, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 02:14:23 np0005558317 relaxed_montalcini[89172]: 167 167
Dec 13 02:14:23 np0005558317 systemd[1]: libpod-e3d7a41e2b73d8e771775f331a60c383bce470e43881da06e942f13d7a5d9f57.scope: Deactivated successfully.
Dec 13 02:14:23 np0005558317 podman[89159]: 2025-12-13 07:14:23.900112424 +0000 UTC m=+0.097298745 container died e3d7a41e2b73d8e771775f331a60c383bce470e43881da06e942f13d7a5d9f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 02:14:23 np0005558317 podman[89159]: 2025-12-13 07:14:23.819597385 +0000 UTC m=+0.016783726 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:23 np0005558317 podman[89159]: 2025-12-13 07:14:23.91861017 +0000 UTC m=+0.115796492 container remove e3d7a41e2b73d8e771775f331a60c383bce470e43881da06e942f13d7a5d9f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_montalcini, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 02:14:23 np0005558317 systemd[1]: var-lib-containers-storage-overlay-3a200cc219a7b0b34a2fdb631046a5bf4f13ddc9265ff69e2903d34bad7dd913-merged.mount: Deactivated successfully.
Dec 13 02:14:23 np0005558317 systemd[1]: libpod-conmon-e3d7a41e2b73d8e771775f331a60c383bce470e43881da06e942f13d7a5d9f57.scope: Deactivated successfully.
Dec 13 02:14:24 np0005558317 podman[89194]: 2025-12-13 07:14:24.036363168 +0000 UTC m=+0.028681849 container create 1982c46dcb91f5e361ddf958a3d25d471b5c727a13991ee05ea161970f4bd59e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_hugle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:24 np0005558317 systemd[1]: Started libpod-conmon-1982c46dcb91f5e361ddf958a3d25d471b5c727a13991ee05ea161970f4bd59e.scope.
Dec 13 02:14:24 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:24 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43a43f7adaafb9528fd0f1d7970bdf88ecb17bce41ad69c547b7092f6931814a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:24 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43a43f7adaafb9528fd0f1d7970bdf88ecb17bce41ad69c547b7092f6931814a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:24 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43a43f7adaafb9528fd0f1d7970bdf88ecb17bce41ad69c547b7092f6931814a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:24 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43a43f7adaafb9528fd0f1d7970bdf88ecb17bce41ad69c547b7092f6931814a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:24 np0005558317 podman[89194]: 2025-12-13 07:14:24.093765697 +0000 UTC m=+0.086084387 container init 1982c46dcb91f5e361ddf958a3d25d471b5c727a13991ee05ea161970f4bd59e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_hugle, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:24 np0005558317 podman[89194]: 2025-12-13 07:14:24.100893785 +0000 UTC m=+0.093212475 container start 1982c46dcb91f5e361ddf958a3d25d471b5c727a13991ee05ea161970f4bd59e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_hugle, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:24 np0005558317 podman[89194]: 2025-12-13 07:14:24.102109139 +0000 UTC m=+0.094427819 container attach 1982c46dcb91f5e361ddf958a3d25d471b5c727a13991ee05ea161970f4bd59e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_hugle, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 02:14:24 np0005558317 podman[89194]: 2025-12-13 07:14:24.024848767 +0000 UTC m=+0.017167457 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:24 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v29: 1 pgs: 1 creating+peering; 0 B data, 879 MiB used, 59 GiB / 60 GiB avail
Dec 13 02:14:24 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Dec 13 02:14:24 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.qsherl(active, since 46s)
Dec 13 02:14:24 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e16 e16: 3 total, 3 up, 3 in
Dec 13 02:14:24 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 3 up, 3 in
Dec 13 02:14:24 np0005558317 lvm[89285]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:14:24 np0005558317 lvm[89284]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:14:24 np0005558317 lvm[89284]: VG ceph_vg0 finished
Dec 13 02:14:24 np0005558317 lvm[89285]: VG ceph_vg1 finished
Dec 13 02:14:24 np0005558317 lvm[89288]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:14:24 np0005558317 lvm[89288]: VG ceph_vg2 finished
Dec 13 02:14:24 np0005558317 priceless_hugle[89207]: {}
Dec 13 02:14:24 np0005558317 systemd[1]: libpod-1982c46dcb91f5e361ddf958a3d25d471b5c727a13991ee05ea161970f4bd59e.scope: Deactivated successfully.
Dec 13 02:14:24 np0005558317 podman[89194]: 2025-12-13 07:14:24.745906876 +0000 UTC m=+0.738225566 container died 1982c46dcb91f5e361ddf958a3d25d471b5c727a13991ee05ea161970f4bd59e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_hugle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 02:14:24 np0005558317 systemd[1]: var-lib-containers-storage-overlay-43a43f7adaafb9528fd0f1d7970bdf88ecb17bce41ad69c547b7092f6931814a-merged.mount: Deactivated successfully.
Dec 13 02:14:24 np0005558317 podman[89194]: 2025-12-13 07:14:24.770124042 +0000 UTC m=+0.762442722 container remove 1982c46dcb91f5e361ddf958a3d25d471b5c727a13991ee05ea161970f4bd59e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 02:14:24 np0005558317 systemd[1]: libpod-conmon-1982c46dcb91f5e361ddf958a3d25d471b5c727a13991ee05ea161970f4bd59e.scope: Deactivated successfully.
Dec 13 02:14:24 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:14:24 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:24 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:14:24 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:25 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:25 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:26 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v31: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:14:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:14:28 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v32: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:14:30 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v33: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:14:32 np0005558317 python3[89349]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:14:32 np0005558317 podman[89351]: 2025-12-13 07:14:32.097564653 +0000 UTC m=+0.028045733 container create 2579e970aaad26823417682fcaa9702df67c425ffa752a9b6a2357b0fb8c292f (image=quay.io/ceph/ceph:v20, name=dazzling_ellis, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:32 np0005558317 systemd[1]: Started libpod-conmon-2579e970aaad26823417682fcaa9702df67c425ffa752a9b6a2357b0fb8c292f.scope.
Dec 13 02:14:32 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:32 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b4addaf2f58c21e4a038c87e13d5f812e5dffb449bf7d7af8f0f51d99536d9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:32 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b4addaf2f58c21e4a038c87e13d5f812e5dffb449bf7d7af8f0f51d99536d9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:32 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b4addaf2f58c21e4a038c87e13d5f812e5dffb449bf7d7af8f0f51d99536d9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:32 np0005558317 podman[89351]: 2025-12-13 07:14:32.151058957 +0000 UTC m=+0.081540037 container init 2579e970aaad26823417682fcaa9702df67c425ffa752a9b6a2357b0fb8c292f (image=quay.io/ceph/ceph:v20, name=dazzling_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 02:14:32 np0005558317 podman[89351]: 2025-12-13 07:14:32.157135828 +0000 UTC m=+0.087616908 container start 2579e970aaad26823417682fcaa9702df67c425ffa752a9b6a2357b0fb8c292f (image=quay.io/ceph/ceph:v20, name=dazzling_ellis, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:32 np0005558317 podman[89351]: 2025-12-13 07:14:32.15814765 +0000 UTC m=+0.088628730 container attach 2579e970aaad26823417682fcaa9702df67c425ffa752a9b6a2357b0fb8c292f (image=quay.io/ceph/ceph:v20, name=dazzling_ellis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 02:14:32 np0005558317 podman[89351]: 2025-12-13 07:14:32.086707598 +0000 UTC m=+0.017188698 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:14:32 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v34: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:14:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 13 02:14:32 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/520977248' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 13 02:14:32 np0005558317 dazzling_ellis[89364]: 
Dec 13 02:14:32 np0005558317 dazzling_ellis[89364]: {"fsid":"00fdae1b-7fad-5f1b-8734-ba4d9298a6de","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":69,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":16,"num_osds":3,"num_up_osds":3,"osd_up_since":1765610062,"num_in_osds":3,"osd_in_since":1765610047,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":502935552,"bytes_avail":63908990976,"bytes_total":64411926528},"fsmap":{"epoch":1,"btime":"2025-12-13T07:13:21:319345+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-13T07:13:21.320643+0000","services":{}},"progress_events":{}}
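The `dazzling_ellis` container output above is the `ceph status --format json` document that the preceding Ansible task pipes through `jq .osdmap.num_up_osds`. As a minimal sketch of the same extraction without `jq`, the snippet below parses an abridged copy of that status JSON (only the fields used here) and reads the up-OSD count; the final comparison against `num_osds` is a hypothetical readiness check, not something the logged playbook is shown doing.

```python
import json

# Abridged `ceph status --format json` output, as logged above.
status = json.loads("""
{"fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
 "health": {"status": "HEALTH_OK", "checks": {}, "mutes": []},
 "osdmap": {"epoch": 16, "num_osds": 3, "num_up_osds": 3, "num_in_osds": 3},
 "pgmap": {"num_pgs": 1, "bytes_used": 502935552, "bytes_total": 64411926528}}
""")

# Equivalent of `jq .osdmap.num_up_osds` in the Ansible task.
num_up = status["osdmap"]["num_up_osds"]
print(num_up)  # → 3

# Hypothetical gate: proceed only once every OSD is up.
assert num_up == status["osdmap"]["num_osds"]
```

With all 3 OSDs up and in (matching the monitor's `osdmap e16: 3 total, 3 up, 3 in` above), the deployment proceeds to pool creation.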
Dec 13 02:14:32 np0005558317 systemd[1]: libpod-2579e970aaad26823417682fcaa9702df67c425ffa752a9b6a2357b0fb8c292f.scope: Deactivated successfully.
Dec 13 02:14:32 np0005558317 podman[89351]: 2025-12-13 07:14:32.558694553 +0000 UTC m=+0.489175644 container died 2579e970aaad26823417682fcaa9702df67c425ffa752a9b6a2357b0fb8c292f (image=quay.io/ceph/ceph:v20, name=dazzling_ellis, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 02:14:32 np0005558317 systemd[1]: var-lib-containers-storage-overlay-d2b4addaf2f58c21e4a038c87e13d5f812e5dffb449bf7d7af8f0f51d99536d9-merged.mount: Deactivated successfully.
Dec 13 02:14:32 np0005558317 podman[89351]: 2025-12-13 07:14:32.579558336 +0000 UTC m=+0.510039417 container remove 2579e970aaad26823417682fcaa9702df67c425ffa752a9b6a2357b0fb8c292f (image=quay.io/ceph/ceph:v20, name=dazzling_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 02:14:32 np0005558317 systemd[1]: libpod-conmon-2579e970aaad26823417682fcaa9702df67c425ffa752a9b6a2357b0fb8c292f.scope: Deactivated successfully.
Dec 13 02:14:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:14:32 np0005558317 python3[89423]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:14:32 np0005558317 podman[89424]: 2025-12-13 07:14:32.979122013 +0000 UTC m=+0.030890700 container create 4b25ada4cbe0bf391cd078b863636dee15f410b7189a2cedf1e07dbcefc12c25 (image=quay.io/ceph/ceph:v20, name=pedantic_chatterjee, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:33 np0005558317 systemd[1]: Started libpod-conmon-4b25ada4cbe0bf391cd078b863636dee15f410b7189a2cedf1e07dbcefc12c25.scope.
Dec 13 02:14:33 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:33 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b4b3b0d99c81aff7dca681f98332f9814f628e47aa3b368079aa704c4053938/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:33 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b4b3b0d99c81aff7dca681f98332f9814f628e47aa3b368079aa704c4053938/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:33 np0005558317 podman[89424]: 2025-12-13 07:14:33.029284763 +0000 UTC m=+0.081053452 container init 4b25ada4cbe0bf391cd078b863636dee15f410b7189a2cedf1e07dbcefc12c25 (image=quay.io/ceph/ceph:v20, name=pedantic_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:33 np0005558317 podman[89424]: 2025-12-13 07:14:33.033802715 +0000 UTC m=+0.085571404 container start 4b25ada4cbe0bf391cd078b863636dee15f410b7189a2cedf1e07dbcefc12c25 (image=quay.io/ceph/ceph:v20, name=pedantic_chatterjee, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:33 np0005558317 podman[89424]: 2025-12-13 07:14:33.035005626 +0000 UTC m=+0.086774314 container attach 4b25ada4cbe0bf391cd078b863636dee15f410b7189a2cedf1e07dbcefc12c25 (image=quay.io/ceph/ceph:v20, name=pedantic_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:33 np0005558317 podman[89424]: 2025-12-13 07:14:32.968507772 +0000 UTC m=+0.020276470 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:14:33 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 13 02:14:33 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1627129991' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 02:14:33 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Dec 13 02:14:33 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/1627129991' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 02:14:33 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1627129991' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 02:14:33 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Dec 13 02:14:33 np0005558317 pedantic_chatterjee[89436]: pool 'vms' created
Dec 13 02:14:33 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Dec 13 02:14:33 np0005558317 systemd[1]: libpod-4b25ada4cbe0bf391cd078b863636dee15f410b7189a2cedf1e07dbcefc12c25.scope: Deactivated successfully.
Dec 13 02:14:33 np0005558317 conmon[89436]: conmon 4b25ada4cbe0bf391cd0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4b25ada4cbe0bf391cd078b863636dee15f410b7189a2cedf1e07dbcefc12c25.scope/container/memory.events
Dec 13 02:14:33 np0005558317 podman[89424]: 2025-12-13 07:14:33.424399186 +0000 UTC m=+0.476167874 container died 4b25ada4cbe0bf391cd078b863636dee15f410b7189a2cedf1e07dbcefc12c25 (image=quay.io/ceph/ceph:v20, name=pedantic_chatterjee, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:33 np0005558317 systemd[1]: var-lib-containers-storage-overlay-3b4b3b0d99c81aff7dca681f98332f9814f628e47aa3b368079aa704c4053938-merged.mount: Deactivated successfully.
Dec 13 02:14:33 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:33 np0005558317 podman[89424]: 2025-12-13 07:14:33.44395923 +0000 UTC m=+0.495727918 container remove 4b25ada4cbe0bf391cd078b863636dee15f410b7189a2cedf1e07dbcefc12c25 (image=quay.io/ceph/ceph:v20, name=pedantic_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 02:14:33 np0005558317 systemd[1]: libpod-conmon-4b25ada4cbe0bf391cd078b863636dee15f410b7189a2cedf1e07dbcefc12c25.scope: Deactivated successfully.
Dec 13 02:14:33 np0005558317 python3[89497]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:14:33 np0005558317 podman[89498]: 2025-12-13 07:14:33.699189038 +0000 UTC m=+0.026360756 container create bb91b3475c91c5a16f5730ba8a374632d23ff6df6948e8bc52f7a49d56fe3349 (image=quay.io/ceph/ceph:v20, name=xenodochial_maxwell, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:33 np0005558317 systemd[1]: Started libpod-conmon-bb91b3475c91c5a16f5730ba8a374632d23ff6df6948e8bc52f7a49d56fe3349.scope.
Dec 13 02:14:33 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:33 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2412ac98e034676ee508b9475961ecc707091a91bffe7a6a754c3d4fa0fd86e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:33 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2412ac98e034676ee508b9475961ecc707091a91bffe7a6a754c3d4fa0fd86e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:33 np0005558317 podman[89498]: 2025-12-13 07:14:33.748951827 +0000 UTC m=+0.076123545 container init bb91b3475c91c5a16f5730ba8a374632d23ff6df6948e8bc52f7a49d56fe3349 (image=quay.io/ceph/ceph:v20, name=xenodochial_maxwell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:33 np0005558317 podman[89498]: 2025-12-13 07:14:33.752984337 +0000 UTC m=+0.080156044 container start bb91b3475c91c5a16f5730ba8a374632d23ff6df6948e8bc52f7a49d56fe3349 (image=quay.io/ceph/ceph:v20, name=xenodochial_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:33 np0005558317 podman[89498]: 2025-12-13 07:14:33.754115473 +0000 UTC m=+0.081287181 container attach bb91b3475c91c5a16f5730ba8a374632d23ff6df6948e8bc52f7a49d56fe3349 (image=quay.io/ceph/ceph:v20, name=xenodochial_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 02:14:33 np0005558317 podman[89498]: 2025-12-13 07:14:33.688913595 +0000 UTC m=+0.016085323 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:14:34 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 13 02:14:34 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2508690959' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 02:14:34 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v36: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:14:34 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Dec 13 02:14:34 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2508690959' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 02:14:34 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Dec 13 02:14:34 np0005558317 xenodochial_maxwell[89510]: pool 'volumes' created
Dec 13 02:14:34 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Dec 13 02:14:34 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 18 pg[3.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:34 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/1627129991' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 02:14:34 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/2508690959' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 02:14:34 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:34 np0005558317 systemd[1]: libpod-bb91b3475c91c5a16f5730ba8a374632d23ff6df6948e8bc52f7a49d56fe3349.scope: Deactivated successfully.
Dec 13 02:14:34 np0005558317 podman[89498]: 2025-12-13 07:14:34.428710734 +0000 UTC m=+0.755882452 container died bb91b3475c91c5a16f5730ba8a374632d23ff6df6948e8bc52f7a49d56fe3349 (image=quay.io/ceph/ceph:v20, name=xenodochial_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 02:14:34 np0005558317 systemd[1]: var-lib-containers-storage-overlay-f2412ac98e034676ee508b9475961ecc707091a91bffe7a6a754c3d4fa0fd86e-merged.mount: Deactivated successfully.
Dec 13 02:14:34 np0005558317 podman[89498]: 2025-12-13 07:14:34.447999737 +0000 UTC m=+0.775171445 container remove bb91b3475c91c5a16f5730ba8a374632d23ff6df6948e8bc52f7a49d56fe3349 (image=quay.io/ceph/ceph:v20, name=xenodochial_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:34 np0005558317 systemd[1]: libpod-conmon-bb91b3475c91c5a16f5730ba8a374632d23ff6df6948e8bc52f7a49d56fe3349.scope: Deactivated successfully.
Dec 13 02:14:34 np0005558317 python3[89571]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:14:34 np0005558317 podman[89572]: 2025-12-13 07:14:34.702414783 +0000 UTC m=+0.029594093 container create 498da08979393cb4104be64c3e078ba0fd547d3efa84f45be086ea79248804d1 (image=quay.io/ceph/ceph:v20, name=angry_goldstine, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:34 np0005558317 systemd[1]: Started libpod-conmon-498da08979393cb4104be64c3e078ba0fd547d3efa84f45be086ea79248804d1.scope.
Dec 13 02:14:34 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:34 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2858ab58be0a1f66ec601d1b31c498137e8712d66d210a06cda28af62f6f59/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:34 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2858ab58be0a1f66ec601d1b31c498137e8712d66d210a06cda28af62f6f59/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:34 np0005558317 podman[89572]: 2025-12-13 07:14:34.762516095 +0000 UTC m=+0.089695405 container init 498da08979393cb4104be64c3e078ba0fd547d3efa84f45be086ea79248804d1 (image=quay.io/ceph/ceph:v20, name=angry_goldstine, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 02:14:34 np0005558317 podman[89572]: 2025-12-13 07:14:34.766787995 +0000 UTC m=+0.093967294 container start 498da08979393cb4104be64c3e078ba0fd547d3efa84f45be086ea79248804d1 (image=quay.io/ceph/ceph:v20, name=angry_goldstine, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:34 np0005558317 podman[89572]: 2025-12-13 07:14:34.767841334 +0000 UTC m=+0.095020634 container attach 498da08979393cb4104be64c3e078ba0fd547d3efa84f45be086ea79248804d1 (image=quay.io/ceph/ceph:v20, name=angry_goldstine, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:34 np0005558317 podman[89572]: 2025-12-13 07:14:34.690328187 +0000 UTC m=+0.017507487 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:14:35 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 13 02:14:35 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1452164855' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 02:14:35 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Dec 13 02:14:35 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1452164855' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 02:14:35 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Dec 13 02:14:35 np0005558317 angry_goldstine[89584]: pool 'backups' created
Dec 13 02:14:35 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Dec 13 02:14:35 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/2508690959' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 02:14:35 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/1452164855' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 02:14:35 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/1452164855' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 02:14:35 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 19 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:35 np0005558317 systemd[1]: libpod-498da08979393cb4104be64c3e078ba0fd547d3efa84f45be086ea79248804d1.scope: Deactivated successfully.
Dec 13 02:14:35 np0005558317 podman[89572]: 2025-12-13 07:14:35.430947681 +0000 UTC m=+0.758126971 container died 498da08979393cb4104be64c3e078ba0fd547d3efa84f45be086ea79248804d1 (image=quay.io/ceph/ceph:v20, name=angry_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:35 np0005558317 systemd[1]: var-lib-containers-storage-overlay-ec2858ab58be0a1f66ec601d1b31c498137e8712d66d210a06cda28af62f6f59-merged.mount: Deactivated successfully.
Dec 13 02:14:35 np0005558317 podman[89572]: 2025-12-13 07:14:35.448111811 +0000 UTC m=+0.775291111 container remove 498da08979393cb4104be64c3e078ba0fd547d3efa84f45be086ea79248804d1 (image=quay.io/ceph/ceph:v20, name=angry_goldstine, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 02:14:35 np0005558317 systemd[1]: libpod-conmon-498da08979393cb4104be64c3e078ba0fd547d3efa84f45be086ea79248804d1.scope: Deactivated successfully.
Dec 13 02:14:35 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:35 np0005558317 python3[89646]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:14:35 np0005558317 podman[89647]: 2025-12-13 07:14:35.701165359 +0000 UTC m=+0.027985741 container create 284be59bf7ecb16e0c5c3b6e1765c6bf129f5ee12bd6ee98d88431958821f29e (image=quay.io/ceph/ceph:v20, name=ecstatic_lamport, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:35 np0005558317 systemd[1]: Started libpod-conmon-284be59bf7ecb16e0c5c3b6e1765c6bf129f5ee12bd6ee98d88431958821f29e.scope.
Dec 13 02:14:35 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:35 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19785ea1c96cbb045bb91b6890c6176886e0b919ef7edb32ced8108f0f64a9b9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:35 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19785ea1c96cbb045bb91b6890c6176886e0b919ef7edb32ced8108f0f64a9b9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:35 np0005558317 podman[89647]: 2025-12-13 07:14:35.757619785 +0000 UTC m=+0.084440167 container init 284be59bf7ecb16e0c5c3b6e1765c6bf129f5ee12bd6ee98d88431958821f29e (image=quay.io/ceph/ceph:v20, name=ecstatic_lamport, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 02:14:35 np0005558317 podman[89647]: 2025-12-13 07:14:35.762520417 +0000 UTC m=+0.089340809 container start 284be59bf7ecb16e0c5c3b6e1765c6bf129f5ee12bd6ee98d88431958821f29e (image=quay.io/ceph/ceph:v20, name=ecstatic_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:35 np0005558317 podman[89647]: 2025-12-13 07:14:35.763548368 +0000 UTC m=+0.090368750 container attach 284be59bf7ecb16e0c5c3b6e1765c6bf129f5ee12bd6ee98d88431958821f29e (image=quay.io/ceph/ceph:v20, name=ecstatic_lamport, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 02:14:35 np0005558317 podman[89647]: 2025-12-13 07:14:35.690345282 +0000 UTC m=+0.017165664 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:14:36 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 13 02:14:36 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3374455308' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 02:14:36 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v39: 4 pgs: 1 creating+peering, 1 active+clean, 2 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:14:36 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Dec 13 02:14:36 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/3374455308' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 02:14:36 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3374455308' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 02:14:36 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Dec 13 02:14:36 np0005558317 ecstatic_lamport[89659]: pool 'images' created
Dec 13 02:14:36 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Dec 13 02:14:36 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 20 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:36 np0005558317 systemd[1]: libpod-284be59bf7ecb16e0c5c3b6e1765c6bf129f5ee12bd6ee98d88431958821f29e.scope: Deactivated successfully.
Dec 13 02:14:36 np0005558317 podman[89647]: 2025-12-13 07:14:36.438641774 +0000 UTC m=+0.765462157 container died 284be59bf7ecb16e0c5c3b6e1765c6bf129f5ee12bd6ee98d88431958821f29e (image=quay.io/ceph/ceph:v20, name=ecstatic_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:36 np0005558317 systemd[1]: var-lib-containers-storage-overlay-19785ea1c96cbb045bb91b6890c6176886e0b919ef7edb32ced8108f0f64a9b9-merged.mount: Deactivated successfully.
Dec 13 02:14:36 np0005558317 podman[89647]: 2025-12-13 07:14:36.45434581 +0000 UTC m=+0.781166183 container remove 284be59bf7ecb16e0c5c3b6e1765c6bf129f5ee12bd6ee98d88431958821f29e (image=quay.io/ceph/ceph:v20, name=ecstatic_lamport, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 02:14:36 np0005558317 systemd[1]: libpod-conmon-284be59bf7ecb16e0c5c3b6e1765c6bf129f5ee12bd6ee98d88431958821f29e.scope: Deactivated successfully.
Dec 13 02:14:36 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 20 pg[5.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:36 np0005558317 python3[89721]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:14:36 np0005558317 podman[89722]: 2025-12-13 07:14:36.712186276 +0000 UTC m=+0.025871116 container create 405b07271b4ca84efb676cc76f6644929b99da7af8415db77eda15df652b733b (image=quay.io/ceph/ceph:v20, name=keen_knuth, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:36 np0005558317 systemd[1]: Started libpod-conmon-405b07271b4ca84efb676cc76f6644929b99da7af8415db77eda15df652b733b.scope.
Dec 13 02:14:36 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:36 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/552e722051a369f549e5d7535eb8c2fdbd5e793ac08331177c52f102003e3f18/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:36 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/552e722051a369f549e5d7535eb8c2fdbd5e793ac08331177c52f102003e3f18/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:36 np0005558317 podman[89722]: 2025-12-13 07:14:36.763368553 +0000 UTC m=+0.077053393 container init 405b07271b4ca84efb676cc76f6644929b99da7af8415db77eda15df652b733b (image=quay.io/ceph/ceph:v20, name=keen_knuth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:36 np0005558317 podman[89722]: 2025-12-13 07:14:36.767177542 +0000 UTC m=+0.080862383 container start 405b07271b4ca84efb676cc76f6644929b99da7af8415db77eda15df652b733b (image=quay.io/ceph/ceph:v20, name=keen_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Dec 13 02:14:36 np0005558317 podman[89722]: 2025-12-13 07:14:36.768198201 +0000 UTC m=+0.081883042 container attach 405b07271b4ca84efb676cc76f6644929b99da7af8415db77eda15df652b733b (image=quay.io/ceph/ceph:v20, name=keen_knuth, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 02:14:36 np0005558317 podman[89722]: 2025-12-13 07:14:36.702042019 +0000 UTC m=+0.015726880 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:14:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 13 02:14:37 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1251626788' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 02:14:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Dec 13 02:14:37 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1251626788' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 02:14:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Dec 13 02:14:37 np0005558317 keen_knuth[89734]: pool 'cephfs.cephfs.meta' created
Dec 13 02:14:37 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Dec 13 02:14:37 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 21 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:37 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/3374455308' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 02:14:37 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/1251626788' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 02:14:37 np0005558317 systemd[1]: libpod-405b07271b4ca84efb676cc76f6644929b99da7af8415db77eda15df652b733b.scope: Deactivated successfully.
Dec 13 02:14:37 np0005558317 conmon[89734]: conmon 405b07271b4ca84efb67 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-405b07271b4ca84efb676cc76f6644929b99da7af8415db77eda15df652b733b.scope/container/memory.events
Dec 13 02:14:37 np0005558317 podman[89722]: 2025-12-13 07:14:37.443768935 +0000 UTC m=+0.757453775 container died 405b07271b4ca84efb676cc76f6644929b99da7af8415db77eda15df652b733b (image=quay.io/ceph/ceph:v20, name=keen_knuth, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 02:14:37 np0005558317 systemd[1]: var-lib-containers-storage-overlay-552e722051a369f549e5d7535eb8c2fdbd5e793ac08331177c52f102003e3f18-merged.mount: Deactivated successfully.
Dec 13 02:14:37 np0005558317 podman[89722]: 2025-12-13 07:14:37.462482477 +0000 UTC m=+0.776167318 container remove 405b07271b4ca84efb676cc76f6644929b99da7af8415db77eda15df652b733b (image=quay.io/ceph/ceph:v20, name=keen_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:37 np0005558317 systemd[1]: libpod-conmon-405b07271b4ca84efb676cc76f6644929b99da7af8415db77eda15df652b733b.scope: Deactivated successfully.
Dec 13 02:14:37 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 21 pg[6.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:37 np0005558317 python3[89797]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:14:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e21 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:14:37 np0005558317 podman[89798]: 2025-12-13 07:14:37.718139478 +0000 UTC m=+0.027889450 container create 2ed195410360f11343614c280fe56d3a90316c83bfb5a17c4a9517415d2cfb1a (image=quay.io/ceph/ceph:v20, name=youthful_buck, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:37 np0005558317 systemd[1]: Started libpod-conmon-2ed195410360f11343614c280fe56d3a90316c83bfb5a17c4a9517415d2cfb1a.scope.
Dec 13 02:14:37 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:37 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79858012b8f09bc143436e83dcd7139c918ee35d869b81d016329c0b112e9f5d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:37 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79858012b8f09bc143436e83dcd7139c918ee35d869b81d016329c0b112e9f5d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:37 np0005558317 podman[89798]: 2025-12-13 07:14:37.771492285 +0000 UTC m=+0.081242278 container init 2ed195410360f11343614c280fe56d3a90316c83bfb5a17c4a9517415d2cfb1a (image=quay.io/ceph/ceph:v20, name=youthful_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 02:14:37 np0005558317 podman[89798]: 2025-12-13 07:14:37.781302674 +0000 UTC m=+0.091052646 container start 2ed195410360f11343614c280fe56d3a90316c83bfb5a17c4a9517415d2cfb1a (image=quay.io/ceph/ceph:v20, name=youthful_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:14:37 np0005558317 podman[89798]: 2025-12-13 07:14:37.784215409 +0000 UTC m=+0.093965380 container attach 2ed195410360f11343614c280fe56d3a90316c83bfb5a17c4a9517415d2cfb1a (image=quay.io/ceph/ceph:v20, name=youthful_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Dec 13 02:14:37 np0005558317 podman[89798]: 2025-12-13 07:14:37.706629495 +0000 UTC m=+0.016379487 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:14:38 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 13 02:14:38 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2902257096' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 02:14:38 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v42: 6 pgs: 1 creating+peering, 2 active+clean, 3 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:14:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:14:38
Dec 13 02:14:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:14:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Some PGs (0.500000) are unknown; try again later
Dec 13 02:14:38 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Dec 13 02:14:38 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2902257096' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 02:14:38 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Dec 13 02:14:38 np0005558317 youthful_buck[89811]: pool 'cephfs.cephfs.data' created
Dec 13 02:14:38 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Dec 13 02:14:38 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 22 pg[7.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:38 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/1251626788' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 02:14:38 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/2902257096' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 02:14:38 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:38 np0005558317 systemd[1]: libpod-2ed195410360f11343614c280fe56d3a90316c83bfb5a17c4a9517415d2cfb1a.scope: Deactivated successfully.
Dec 13 02:14:38 np0005558317 podman[89798]: 2025-12-13 07:14:38.460036533 +0000 UTC m=+0.769786505 container died 2ed195410360f11343614c280fe56d3a90316c83bfb5a17c4a9517415d2cfb1a (image=quay.io/ceph/ceph:v20, name=youthful_buck, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 02:14:38 np0005558317 systemd[1]: var-lib-containers-storage-overlay-79858012b8f09bc143436e83dcd7139c918ee35d869b81d016329c0b112e9f5d-merged.mount: Deactivated successfully.
Dec 13 02:14:38 np0005558317 podman[89798]: 2025-12-13 07:14:38.482326166 +0000 UTC m=+0.792076139 container remove 2ed195410360f11343614c280fe56d3a90316c83bfb5a17c4a9517415d2cfb1a (image=quay.io/ceph/ceph:v20, name=youthful_buck, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:14:38 np0005558317 systemd[1]: libpod-conmon-2ed195410360f11343614c280fe56d3a90316c83bfb5a17c4a9517415d2cfb1a.scope: Deactivated successfully.
Dec 13 02:14:38 np0005558317 python3[89873]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:14:38 np0005558317 podman[89874]: 2025-12-13 07:14:38.764160607 +0000 UTC m=+0.026957588 container create 2e98911228065af35e96bd97e53842dea5e09f6c8b06905558c0f30732a24e88 (image=quay.io/ceph/ceph:v20, name=nifty_kapitsa, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 02:14:38 np0005558317 systemd[1]: Started libpod-conmon-2e98911228065af35e96bd97e53842dea5e09f6c8b06905558c0f30732a24e88.scope.
Dec 13 02:14:38 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:38 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9c6e9760e05eb70600c53bad02deb0155b7dea505b83fb76134e1891f743163/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:38 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9c6e9760e05eb70600c53bad02deb0155b7dea505b83fb76134e1891f743163/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:38 np0005558317 podman[89874]: 2025-12-13 07:14:38.827237983 +0000 UTC m=+0.090034982 container init 2e98911228065af35e96bd97e53842dea5e09f6c8b06905558c0f30732a24e88 (image=quay.io/ceph/ceph:v20, name=nifty_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:38 np0005558317 podman[89874]: 2025-12-13 07:14:38.831379928 +0000 UTC m=+0.094176907 container start 2e98911228065af35e96bd97e53842dea5e09f6c8b06905558c0f30732a24e88 (image=quay.io/ceph/ceph:v20, name=nifty_kapitsa, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:38 np0005558317 podman[89874]: 2025-12-13 07:14:38.832479514 +0000 UTC m=+0.095276494 container attach 2e98911228065af35e96bd97e53842dea5e09f6c8b06905558c0f30732a24e88 (image=quay.io/ceph/ceph:v20, name=nifty_kapitsa, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 02:14:38 np0005558317 podman[89874]: 2025-12-13 07:14:38.75374899 +0000 UTC m=+0.016545989 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:14:39 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:14:39 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:14:39 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:14:39 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:14:39 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 02:14:39 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:14:39 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 02:14:39 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:14:39 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 02:14:39 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:14:39 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 02:14:39 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:14:39 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 02:14:39 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:14:39 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 02:14:39 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Dec 13 02:14:39 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:14:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:14:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:14:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:14:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:14:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:14:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:14:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:14:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:14:39 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Dec 13 02:14:39 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2315924078' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Dec 13 02:14:39 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Dec 13 02:14:39 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:14:39 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2315924078' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 13 02:14:39 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Dec 13 02:14:39 np0005558317 nifty_kapitsa[89887]: enabled application 'rbd' on pool 'vms'
Dec 13 02:14:39 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Dec 13 02:14:39 np0005558317 ceph-mgr[75200]: [progress INFO root] update: starting ev 3d02a20f-ab72-4b9e-89db-75b6b986c2c7 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 13 02:14:39 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Dec 13 02:14:39 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:14:39 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 23 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:39 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/2902257096' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 02:14:39 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:14:39 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/2315924078' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Dec 13 02:14:39 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:14:39 np0005558317 systemd[1]: libpod-2e98911228065af35e96bd97e53842dea5e09f6c8b06905558c0f30732a24e88.scope: Deactivated successfully.
Dec 13 02:14:39 np0005558317 podman[89874]: 2025-12-13 07:14:39.461371073 +0000 UTC m=+0.724168053 container died 2e98911228065af35e96bd97e53842dea5e09f6c8b06905558c0f30732a24e88 (image=quay.io/ceph/ceph:v20, name=nifty_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:39 np0005558317 systemd[1]: var-lib-containers-storage-overlay-e9c6e9760e05eb70600c53bad02deb0155b7dea505b83fb76134e1891f743163-merged.mount: Deactivated successfully.
Dec 13 02:14:39 np0005558317 podman[89874]: 2025-12-13 07:14:39.479540434 +0000 UTC m=+0.742337413 container remove 2e98911228065af35e96bd97e53842dea5e09f6c8b06905558c0f30732a24e88 (image=quay.io/ceph/ceph:v20, name=nifty_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 02:14:39 np0005558317 systemd[1]: libpod-conmon-2e98911228065af35e96bd97e53842dea5e09f6c8b06905558c0f30732a24e88.scope: Deactivated successfully.
Dec 13 02:14:39 np0005558317 python3[89947]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:14:39 np0005558317 podman[89948]: 2025-12-13 07:14:39.734285751 +0000 UTC m=+0.028089667 container create 2bcd13a6956644982eb6a40fa78c028b319da91aad15a2c2cf23bb0db16a488b (image=quay.io/ceph/ceph:v20, name=ecstatic_allen, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:39 np0005558317 systemd[1]: Started libpod-conmon-2bcd13a6956644982eb6a40fa78c028b319da91aad15a2c2cf23bb0db16a488b.scope.
Dec 13 02:14:39 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:39 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3963b6b96fb103b955b721fe7c90da14278ab054b9f24f6879fa55de43ef129/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:39 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3963b6b96fb103b955b721fe7c90da14278ab054b9f24f6879fa55de43ef129/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:39 np0005558317 podman[89948]: 2025-12-13 07:14:39.777921064 +0000 UTC m=+0.071725010 container init 2bcd13a6956644982eb6a40fa78c028b319da91aad15a2c2cf23bb0db16a488b (image=quay.io/ceph/ceph:v20, name=ecstatic_allen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 02:14:39 np0005558317 podman[89948]: 2025-12-13 07:14:39.782096532 +0000 UTC m=+0.075900458 container start 2bcd13a6956644982eb6a40fa78c028b319da91aad15a2c2cf23bb0db16a488b (image=quay.io/ceph/ceph:v20, name=ecstatic_allen, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 02:14:39 np0005558317 podman[89948]: 2025-12-13 07:14:39.783200968 +0000 UTC m=+0.077004893 container attach 2bcd13a6956644982eb6a40fa78c028b319da91aad15a2c2cf23bb0db16a488b (image=quay.io/ceph/ceph:v20, name=ecstatic_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 02:14:39 np0005558317 podman[89948]: 2025-12-13 07:14:39.722994129 +0000 UTC m=+0.016798054 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:14:40 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Dec 13 02:14:40 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4080177198' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Dec 13 02:14:40 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v45: 7 pgs: 1 creating+peering, 4 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:14:40 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 02:14:40 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:14:40 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Dec 13 02:14:40 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:14:40 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4080177198' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 13 02:14:40 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:14:40 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Dec 13 02:14:40 np0005558317 ecstatic_allen[89960]: enabled application 'rbd' on pool 'volumes'
Dec 13 02:14:40 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Dec 13 02:14:40 np0005558317 ceph-mgr[75200]: [progress INFO root] update: starting ev b94c178d-fa47-4df1-8a52-3fbb7d542575 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 13 02:14:40 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Dec 13 02:14:40 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:14:40 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/2315924078' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 13 02:14:40 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:14:40 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/4080177198' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Dec 13 02:14:40 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:14:40 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:14:40 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/4080177198' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 13 02:14:40 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:14:40 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:14:40 np0005558317 systemd[1]: libpod-2bcd13a6956644982eb6a40fa78c028b319da91aad15a2c2cf23bb0db16a488b.scope: Deactivated successfully.
Dec 13 02:14:40 np0005558317 conmon[89960]: conmon 2bcd13a6956644982eb6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2bcd13a6956644982eb6a40fa78c028b319da91aad15a2c2cf23bb0db16a488b.scope/container/memory.events
Dec 13 02:14:40 np0005558317 podman[89948]: 2025-12-13 07:14:40.465052166 +0000 UTC m=+0.758856102 container died 2bcd13a6956644982eb6a40fa78c028b319da91aad15a2c2cf23bb0db16a488b (image=quay.io/ceph/ceph:v20, name=ecstatic_allen, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 02:14:40 np0005558317 systemd[1]: var-lib-containers-storage-overlay-c3963b6b96fb103b955b721fe7c90da14278ab054b9f24f6879fa55de43ef129-merged.mount: Deactivated successfully.
Dec 13 02:14:40 np0005558317 podman[89948]: 2025-12-13 07:14:40.482999188 +0000 UTC m=+0.776803114 container remove 2bcd13a6956644982eb6a40fa78c028b319da91aad15a2c2cf23bb0db16a488b (image=quay.io/ceph/ceph:v20, name=ecstatic_allen, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 02:14:40 np0005558317 systemd[1]: libpod-conmon-2bcd13a6956644982eb6a40fa78c028b319da91aad15a2c2cf23bb0db16a488b.scope: Deactivated successfully.
Dec 13 02:14:40 np0005558317 python3[90020]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:14:40 np0005558317 podman[90021]: 2025-12-13 07:14:40.73252794 +0000 UTC m=+0.027765828 container create d8f476a2696e752cbfe04867c31133b8c4daba31be20c15fac0e62c2dd264c8b (image=quay.io/ceph/ceph:v20, name=stoic_colden, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 02:14:40 np0005558317 systemd[1]: Started libpod-conmon-d8f476a2696e752cbfe04867c31133b8c4daba31be20c15fac0e62c2dd264c8b.scope.
Dec 13 02:14:40 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:40 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ae9864bc1038050d6931ffdd98fc340eb28b8f1d60f2a16228d46f7ae568d40/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:40 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ae9864bc1038050d6931ffdd98fc340eb28b8f1d60f2a16228d46f7ae568d40/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:40 np0005558317 podman[90021]: 2025-12-13 07:14:40.788349599 +0000 UTC m=+0.083587485 container init d8f476a2696e752cbfe04867c31133b8c4daba31be20c15fac0e62c2dd264c8b (image=quay.io/ceph/ceph:v20, name=stoic_colden, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:40 np0005558317 podman[90021]: 2025-12-13 07:14:40.792761941 +0000 UTC m=+0.087999828 container start d8f476a2696e752cbfe04867c31133b8c4daba31be20c15fac0e62c2dd264c8b (image=quay.io/ceph/ceph:v20, name=stoic_colden, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 02:14:40 np0005558317 podman[90021]: 2025-12-13 07:14:40.793966414 +0000 UTC m=+0.089204302 container attach d8f476a2696e752cbfe04867c31133b8c4daba31be20c15fac0e62c2dd264c8b (image=quay.io/ceph/ceph:v20, name=stoic_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:40 np0005558317 podman[90021]: 2025-12-13 07:14:40.721490034 +0000 UTC m=+0.016727941 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:14:41 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Dec 13 02:14:41 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2829471765' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Dec 13 02:14:41 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Dec 13 02:14:41 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:14:41 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2829471765' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 13 02:14:41 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Dec 13 02:14:41 np0005558317 stoic_colden[90034]: enabled application 'rbd' on pool 'backups'
Dec 13 02:14:41 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Dec 13 02:14:41 np0005558317 ceph-mgr[75200]: [progress INFO root] update: starting ev 4f8dd59e-3051-4e46-8d31-fee7db51a7ff (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 13 02:14:41 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Dec 13 02:14:41 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:14:41 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/2829471765' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Dec 13 02:14:41 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:14:41 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/2829471765' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 13 02:14:41 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:14:41 np0005558317 systemd[1]: libpod-d8f476a2696e752cbfe04867c31133b8c4daba31be20c15fac0e62c2dd264c8b.scope: Deactivated successfully.
Dec 13 02:14:41 np0005558317 conmon[90034]: conmon d8f476a2696e752cbfe0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d8f476a2696e752cbfe04867c31133b8c4daba31be20c15fac0e62c2dd264c8b.scope/container/memory.events
Dec 13 02:14:41 np0005558317 podman[90021]: 2025-12-13 07:14:41.466241374 +0000 UTC m=+0.761479261 container died d8f476a2696e752cbfe04867c31133b8c4daba31be20c15fac0e62c2dd264c8b (image=quay.io/ceph/ceph:v20, name=stoic_colden, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 02:14:41 np0005558317 systemd[1]: var-lib-containers-storage-overlay-4ae9864bc1038050d6931ffdd98fc340eb28b8f1d60f2a16228d46f7ae568d40-merged.mount: Deactivated successfully.
Dec 13 02:14:41 np0005558317 podman[90021]: 2025-12-13 07:14:41.484264368 +0000 UTC m=+0.779502255 container remove d8f476a2696e752cbfe04867c31133b8c4daba31be20c15fac0e62c2dd264c8b (image=quay.io/ceph/ceph:v20, name=stoic_colden, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:41 np0005558317 systemd[1]: libpod-conmon-d8f476a2696e752cbfe04867c31133b8c4daba31be20c15fac0e62c2dd264c8b.scope: Deactivated successfully.
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 24 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=24 pruub=8.740574837s) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active pruub 31.924657822s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 24 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=24 pruub=8.740574837s) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown pruub 31.924657822s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 python3[90094]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:14:41 np0005558317 podman[90095]: 2025-12-13 07:14:41.726685922 +0000 UTC m=+0.024455886 container create 9045833b68e740863271022224ee0fa5d5f6817543a86a56703460fcc48a3355 (image=quay.io/ceph/ceph:v20, name=crazy_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 02:14:41 np0005558317 systemd[1]: Started libpod-conmon-9045833b68e740863271022224ee0fa5d5f6817543a86a56703460fcc48a3355.scope.
Dec 13 02:14:41 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:41 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7505eccb8a0be9457a2d2f8158f8aa7fafb8e9a0c968ad7c101b18585e380e4a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:41 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7505eccb8a0be9457a2d2f8158f8aa7fafb8e9a0c968ad7c101b18585e380e4a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:41 np0005558317 podman[90095]: 2025-12-13 07:14:41.770675412 +0000 UTC m=+0.068445386 container init 9045833b68e740863271022224ee0fa5d5f6817543a86a56703460fcc48a3355 (image=quay.io/ceph/ceph:v20, name=crazy_wu, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:41 np0005558317 podman[90095]: 2025-12-13 07:14:41.774657136 +0000 UTC m=+0.072427099 container start 9045833b68e740863271022224ee0fa5d5f6817543a86a56703460fcc48a3355 (image=quay.io/ceph/ceph:v20, name=crazy_wu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:41 np0005558317 podman[90095]: 2025-12-13 07:14:41.775689225 +0000 UTC m=+0.073459189 container attach 9045833b68e740863271022224ee0fa5d5f6817543a86a56703460fcc48a3355 (image=quay.io/ceph/ceph:v20, name=crazy_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.1d( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.1f( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.1e( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.1c( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.b( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.a( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.9( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.8( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.6( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.5( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.4( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.3( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.2( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.1( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.7( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.c( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.f( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.10( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.11( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.e( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.d( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.12( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.13( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.14( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.15( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.16( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.18( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.19( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.1a( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.17( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.1b( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:41 np0005558317 podman[90095]: 2025-12-13 07:14:41.717135883 +0000 UTC m=+0.014905867 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:14:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Dec 13 02:14:42 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3539658522' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Dec 13 02:14:42 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v48: 38 pgs: 6 active+clean, 32 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:14:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 02:14:42 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:14:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 02:14:42 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:14:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Dec 13 02:14:42 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:14:42 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3539658522' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 13 02:14:42 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:14:42 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:14:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Dec 13 02:14:42 np0005558317 crazy_wu[90108]: enabled application 'rbd' on pool 'images'
Dec 13 02:14:42 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 26 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=26 pruub=9.974254608s) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active pruub 39.922641754s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:42 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Dec 13 02:14:42 np0005558317 ceph-mgr[75200]: [progress INFO root] update: starting ev ab92441d-cd7c-4669-8597-8e45e7d32a0d (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 13 02:14:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0)
Dec 13 02:14:42 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:14:42 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 26 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=26 pruub=9.974254608s) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown pruub 39.922641754s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.1f( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.1d( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.b( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.1e( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.1c( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.a( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.9( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.8( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.5( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.3( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.6( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.4( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.1( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.2( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.7( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.c( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.e( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.0( empty local-lis/les=24/26 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.11( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.10( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.12( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.13( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.14( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.16( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.17( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.15( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.19( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.1a( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.1b( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.d( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.f( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.18( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:42 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/3539658522' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Dec 13 02:14:42 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:14:42 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:14:42 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:14:42 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/3539658522' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 13 02:14:42 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:14:42 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:14:42 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:14:42 np0005558317 systemd[1]: libpod-9045833b68e740863271022224ee0fa5d5f6817543a86a56703460fcc48a3355.scope: Deactivated successfully.
Dec 13 02:14:42 np0005558317 podman[90095]: 2025-12-13 07:14:42.471034054 +0000 UTC m=+0.768804018 container died 9045833b68e740863271022224ee0fa5d5f6817543a86a56703460fcc48a3355 (image=quay.io/ceph/ceph:v20, name=crazy_wu, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:42 np0005558317 systemd[1]: var-lib-containers-storage-overlay-7505eccb8a0be9457a2d2f8158f8aa7fafb8e9a0c968ad7c101b18585e380e4a-merged.mount: Deactivated successfully.
Dec 13 02:14:42 np0005558317 podman[90095]: 2025-12-13 07:14:42.488719133 +0000 UTC m=+0.786489108 container remove 9045833b68e740863271022224ee0fa5d5f6817543a86a56703460fcc48a3355 (image=quay.io/ceph/ceph:v20, name=crazy_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 02:14:42 np0005558317 systemd[1]: libpod-conmon-9045833b68e740863271022224ee0fa5d5f6817543a86a56703460fcc48a3355.scope: Deactivated successfully.
Dec 13 02:14:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:14:42 np0005558317 python3[90167]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:14:42 np0005558317 podman[90168]: 2025-12-13 07:14:42.732591836 +0000 UTC m=+0.025515468 container create bcef0b9f440e271a77a271c60899bbe9642badd2701942923565f2064e93ad78 (image=quay.io/ceph/ceph:v20, name=youthful_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 02:14:42 np0005558317 systemd[1]: Started libpod-conmon-bcef0b9f440e271a77a271c60899bbe9642badd2701942923565f2064e93ad78.scope.
Dec 13 02:14:42 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:42 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0ee5b421dc373cd35c3ce81af1aca0cc4147e1fdcf7a9b811466eb24bac979/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:42 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0ee5b421dc373cd35c3ce81af1aca0cc4147e1fdcf7a9b811466eb24bac979/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:42 np0005558317 podman[90168]: 2025-12-13 07:14:42.785454742 +0000 UTC m=+0.078378374 container init bcef0b9f440e271a77a271c60899bbe9642badd2701942923565f2064e93ad78 (image=quay.io/ceph/ceph:v20, name=youthful_varahamihira, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:42 np0005558317 podman[90168]: 2025-12-13 07:14:42.7899332 +0000 UTC m=+0.082856831 container start bcef0b9f440e271a77a271c60899bbe9642badd2701942923565f2064e93ad78 (image=quay.io/ceph/ceph:v20, name=youthful_varahamihira, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:42 np0005558317 podman[90168]: 2025-12-13 07:14:42.791503451 +0000 UTC m=+0.084427103 container attach bcef0b9f440e271a77a271c60899bbe9642badd2701942923565f2064e93ad78 (image=quay.io/ceph/ceph:v20, name=youthful_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 02:14:42 np0005558317 podman[90168]: 2025-12-13 07:14:42.722233287 +0000 UTC m=+0.015156939 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:14:43 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Dec 13 02:14:43 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/904926521' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Dec 13 02:14:43 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Dec 13 02:14:43 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:14:43 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/904926521' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 13 02:14:43 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Dec 13 02:14:43 np0005558317 youthful_varahamihira[90180]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Dec 13 02:14:43 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 26 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=26 pruub=15.961985588s) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active pruub 43.764850616s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.1e( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.1d( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.1c( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.7( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.8( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.b( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.1f( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.6( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.a( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.1b( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.5( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.1a( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.9( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.4( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.19( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.3( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.1( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.2( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.c( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.d( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.e( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.10( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.11( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.12( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.13( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.14( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.15( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.16( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.17( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.f( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.18( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=26 pruub=15.961985588s) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown pruub 43.764850616s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.13( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.11( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.15( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.14( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.17( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.16( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.18( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.19( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.1a( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.1b( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.1d( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.1c( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.1e( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.1f( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.12( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.2( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.4( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.3( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.6( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.5( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.7( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.8( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.a( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.c( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.b( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.e( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.d( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.f( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.10( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.1( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.1e( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.1d( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.7( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.b( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.8( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.1f( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.6( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.1c( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.a( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-mgr[75200]: [progress INFO root] update: starting ev 01779521-6cb6-4003-b6fb-45475e592b47 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec 13 02:14:43 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Dec 13 02:14:43 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:14:43 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.9( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.1b( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.1a( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.9( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.4( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.19( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.1( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.2( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.3( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.5( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.0( empty local-lis/les=26/27 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.c( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.d( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.10( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.11( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.12( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.13( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.14( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.15( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.16( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.17( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.e( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.18( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.f( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:43 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/904926521' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Dec 13 02:14:43 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:14:43 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/904926521' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 13 02:14:43 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:14:43 np0005558317 systemd[1]: libpod-bcef0b9f440e271a77a271c60899bbe9642badd2701942923565f2064e93ad78.scope: Deactivated successfully.
Dec 13 02:14:43 np0005558317 conmon[90180]: conmon bcef0b9f440e271a77a2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bcef0b9f440e271a77a271c60899bbe9642badd2701942923565f2064e93ad78.scope/container/memory.events
Dec 13 02:14:43 np0005558317 podman[90168]: 2025-12-13 07:14:43.47654756 +0000 UTC m=+0.769471193 container died bcef0b9f440e271a77a271c60899bbe9642badd2701942923565f2064e93ad78 (image=quay.io/ceph/ceph:v20, name=youthful_varahamihira, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 02:14:43 np0005558317 systemd[1]: var-lib-containers-storage-overlay-ca0ee5b421dc373cd35c3ce81af1aca0cc4147e1fdcf7a9b811466eb24bac979-merged.mount: Deactivated successfully.
Dec 13 02:14:43 np0005558317 podman[90168]: 2025-12-13 07:14:43.494130567 +0000 UTC m=+0.787054200 container remove bcef0b9f440e271a77a271c60899bbe9642badd2701942923565f2064e93ad78 (image=quay.io/ceph/ceph:v20, name=youthful_varahamihira, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:14:43 np0005558317 systemd[1]: libpod-conmon-bcef0b9f440e271a77a271c60899bbe9642badd2701942923565f2064e93ad78.scope: Deactivated successfully.
Dec 13 02:14:43 np0005558317 python3[90241]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:14:43 np0005558317 podman[90242]: 2025-12-13 07:14:43.751290974 +0000 UTC m=+0.030271677 container create eefeecc58ad7e1dc6d6366319dd9a3e25b752cbd112288a67c2131adc9fdf5ba (image=quay.io/ceph/ceph:v20, name=fervent_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 02:14:43 np0005558317 systemd[1]: Started libpod-conmon-eefeecc58ad7e1dc6d6366319dd9a3e25b752cbd112288a67c2131adc9fdf5ba.scope.
Dec 13 02:14:43 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:43 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2e0cc8d17b37b4fa0b5bc1f8ac57745ccf56f35bb669ae0830d4227eabccd0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:43 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2e0cc8d17b37b4fa0b5bc1f8ac57745ccf56f35bb669ae0830d4227eabccd0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:43 np0005558317 podman[90242]: 2025-12-13 07:14:43.809099175 +0000 UTC m=+0.088079898 container init eefeecc58ad7e1dc6d6366319dd9a3e25b752cbd112288a67c2131adc9fdf5ba (image=quay.io/ceph/ceph:v20, name=fervent_elgamal, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:43 np0005558317 podman[90242]: 2025-12-13 07:14:43.812786916 +0000 UTC m=+0.091767620 container start eefeecc58ad7e1dc6d6366319dd9a3e25b752cbd112288a67c2131adc9fdf5ba (image=quay.io/ceph/ceph:v20, name=fervent_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 02:14:43 np0005558317 podman[90242]: 2025-12-13 07:14:43.814112438 +0000 UTC m=+0.093093140 container attach eefeecc58ad7e1dc6d6366319dd9a3e25b752cbd112288a67c2131adc9fdf5ba (image=quay.io/ceph/ceph:v20, name=fervent_elgamal, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:43 np0005558317 podman[90242]: 2025-12-13 07:14:43.73824291 +0000 UTC m=+0.017223633 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:14:44 np0005558317 ceph-mgr[75200]: [progress WARNING root] Starting Global Recovery Event,62 pgs not in active + clean state
Dec 13 02:14:44 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Dec 13 02:14:44 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1328049796' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Dec 13 02:14:44 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v51: 100 pgs: 38 active+clean, 62 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:14:44 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 02:14:44 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:14:44 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 02:14:44 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:14:44 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Dec 13 02:14:44 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Dec 13 02:14:44 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Dec 13 02:14:44 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:14:44 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1328049796' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 13 02:14:44 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:14:44 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:14:44 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Dec 13 02:14:44 np0005558317 fervent_elgamal[90254]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Dec 13 02:14:44 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.1d( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.1f( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.1b( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.9( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.8( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.1e( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.1c( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.7( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-mgr[75200]: [progress INFO root] update: starting ev 0c63b812-d8ea-45f7-a85d-0ba9cc684e1f (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 13 02:14:44 np0005558317 ceph-mgr[75200]: [progress INFO root] complete: finished ev 3d02a20f-ab72-4b9e-89db-75b6b986c2c7 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 13 02:14:44 np0005558317 ceph-mgr[75200]: [progress INFO root] Completed event 3d02a20f-ab72-4b9e-89db-75b6b986c2c7 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Dec 13 02:14:44 np0005558317 ceph-mgr[75200]: [progress INFO root] complete: finished ev b94c178d-fa47-4df1-8a52-3fbb7d542575 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 13 02:14:44 np0005558317 ceph-mgr[75200]: [progress INFO root] Completed event b94c178d-fa47-4df1-8a52-3fbb7d542575 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Dec 13 02:14:44 np0005558317 ceph-mgr[75200]: [progress INFO root] complete: finished ev 4f8dd59e-3051-4e46-8d31-fee7db51a7ff (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.a( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-mgr[75200]: [progress INFO root] Completed event 4f8dd59e-3051-4e46-8d31-fee7db51a7ff (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Dec 13 02:14:44 np0005558317 ceph-mgr[75200]: [progress INFO root] complete: finished ev ab92441d-cd7c-4669-8597-8e45e7d32a0d (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 13 02:14:44 np0005558317 ceph-mgr[75200]: [progress INFO root] Completed event ab92441d-cd7c-4669-8597-8e45e7d32a0d (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.1( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.6( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.4( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.5( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.0( empty local-lis/les=26/28 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.2( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.c( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.b( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.3( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.d( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.e( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.f( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.12( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.10( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.14( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.13( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.15( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.17( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.18( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.16( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.11( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.1a( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.19( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:44 np0005558317 ceph-mgr[75200]: [progress INFO root] complete: finished ev 01779521-6cb6-4003-b6fb-45475e592b47 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec 13 02:14:44 np0005558317 ceph-mgr[75200]: [progress INFO root] Completed event 01779521-6cb6-4003-b6fb-45475e592b47 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Dec 13 02:14:44 np0005558317 ceph-mgr[75200]: [progress INFO root] complete: finished ev 0c63b812-d8ea-45f7-a85d-0ba9cc684e1f (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 13 02:14:44 np0005558317 ceph-mgr[75200]: [progress INFO root] Completed event 0c63b812-d8ea-45f7-a85d-0ba9cc684e1f (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Dec 13 02:14:44 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/1328049796' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Dec 13 02:14:44 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:14:44 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:14:44 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:14:44 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/1328049796' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 13 02:14:44 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:14:44 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:14:44 np0005558317 systemd[1]: libpod-eefeecc58ad7e1dc6d6366319dd9a3e25b752cbd112288a67c2131adc9fdf5ba.scope: Deactivated successfully.
Dec 13 02:14:44 np0005558317 podman[90242]: 2025-12-13 07:14:44.474551451 +0000 UTC m=+0.753532164 container died eefeecc58ad7e1dc6d6366319dd9a3e25b752cbd112288a67c2131adc9fdf5ba (image=quay.io/ceph/ceph:v20, name=fervent_elgamal, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:44 np0005558317 systemd[1]: var-lib-containers-storage-overlay-4e2e0cc8d17b37b4fa0b5bc1f8ac57745ccf56f35bb669ae0830d4227eabccd0-merged.mount: Deactivated successfully.
Dec 13 02:14:44 np0005558317 podman[90242]: 2025-12-13 07:14:44.494830807 +0000 UTC m=+0.773811510 container remove eefeecc58ad7e1dc6d6366319dd9a3e25b752cbd112288a67c2131adc9fdf5ba (image=quay.io/ceph/ceph:v20, name=fervent_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:44 np0005558317 systemd[1]: libpod-conmon-eefeecc58ad7e1dc6d6366319dd9a3e25b752cbd112288a67c2131adc9fdf5ba.scope: Deactivated successfully.
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 28 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=9.272872925s) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 41.943721771s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 28 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=9.272872925s) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown pruub 41.943721771s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 28 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.253122330s) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.938323975s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 28 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.253122330s) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown pruub 34.938323975s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 python3[90363]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.b scrub starts
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.b scrub ok
Dec 13 02:14:45 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Dec 13 02:14:45 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.1c( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.1d( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.1e( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.1f( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.10( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.12( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.11( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.13( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.14( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.15( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.16( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.17( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.8( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.a( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.b( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.7( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.6( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.5( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.4( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.3( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.2( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.1( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.f( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.e( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.9( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.d( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.c( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.1b( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.1a( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.19( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.18( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.1c( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.1e( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.1f( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.1d( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.10( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.11( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.14( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.12( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.15( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.8( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.17( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.a( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.b( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.16( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.13( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.7( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.6( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.5( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.0( empty local-lis/les=28/29 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.2( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.4( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.3( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.f( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.e( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.d( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.9( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.c( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.1( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.1a( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.1b( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.18( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.19( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.1a( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.15( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.14( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.17( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.16( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.11( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.10( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.13( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.12( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.c( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.d( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.f( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.e( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.2( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.3( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.1( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.1b( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.6( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.b( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.18( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.7( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.19( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.4( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.8( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.9( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.5( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.a( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.1e( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.1f( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.1c( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.1d( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.1a( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.16( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.14( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.15( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.17( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.10( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.11( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.13( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.12( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.c( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.d( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.0( empty local-lis/les=28/29 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.e( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.2( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.f( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.3( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.1b( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.1( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.6( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.b( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.18( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.4( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.9( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.5( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.a( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.1e( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.1f( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.1c( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.1d( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.19( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.8( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.7( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:45 np0005558317 python3[90434]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765610085.1056552-36981-271292940554060/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:14:46 np0005558317 python3[90536]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 02:14:46 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v54: 162 pgs: 69 active+clean, 93 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:14:46 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 02:14:46 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:14:46 np0005558317 python3[90611]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765610085.8035133-36995-263164690150905/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=e0c31109065f9377d9a1ac1458da111ccd8d5eb7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:14:46 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Dec 13 02:14:46 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:14:46 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Dec 13 02:14:46 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:14:46 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Dec 13 02:14:46 np0005558317 python3[90661]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:14:46 np0005558317 podman[90662]: 2025-12-13 07:14:46.604424887 +0000 UTC m=+0.029620023 container create ece869f1b61462d684212cd9ae598e74f922f244b46082ce46f6fb72bda832b0 (image=quay.io/ceph/ceph:v20, name=frosty_boyd, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:46 np0005558317 systemd[1]: Started libpod-conmon-ece869f1b61462d684212cd9ae598e74f922f244b46082ce46f6fb72bda832b0.scope.
Dec 13 02:14:46 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:46 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c0ef990b6dbff8a861238c9762a046eb4f42a6c963d19b8ea3e5f53798184a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:46 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c0ef990b6dbff8a861238c9762a046eb4f42a6c963d19b8ea3e5f53798184a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:46 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c0ef990b6dbff8a861238c9762a046eb4f42a6c963d19b8ea3e5f53798184a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:46 np0005558317 podman[90662]: 2025-12-13 07:14:46.657672006 +0000 UTC m=+0.082867142 container init ece869f1b61462d684212cd9ae598e74f922f244b46082ce46f6fb72bda832b0 (image=quay.io/ceph/ceph:v20, name=frosty_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:46 np0005558317 podman[90662]: 2025-12-13 07:14:46.661693724 +0000 UTC m=+0.086888851 container start ece869f1b61462d684212cd9ae598e74f922f244b46082ce46f6fb72bda832b0 (image=quay.io/ceph/ceph:v20, name=frosty_boyd, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:46 np0005558317 podman[90662]: 2025-12-13 07:14:46.662644191 +0000 UTC m=+0.087839327 container attach ece869f1b61462d684212cd9ae598e74f922f244b46082ce46f6fb72bda832b0 (image=quay.io/ceph/ceph:v20, name=frosty_boyd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Dec 13 02:14:46 np0005558317 podman[90662]: 2025-12-13 07:14:46.592293486 +0000 UTC m=+0.017488642 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:14:46 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 13 02:14:46 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2924988186' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 13 02:14:46 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2924988186' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 13 02:14:46 np0005558317 frosty_boyd[90674]: 
Dec 13 02:14:46 np0005558317 frosty_boyd[90674]: [global]
Dec 13 02:14:46 np0005558317 frosty_boyd[90674]: #011fsid = 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 02:14:46 np0005558317 frosty_boyd[90674]: #011mon_host = 192.168.122.100
Dec 13 02:14:46 np0005558317 frosty_boyd[90674]: #011rgw_keystone_api_version = 3
Dec 13 02:14:46 np0005558317 systemd[1]: libpod-ece869f1b61462d684212cd9ae598e74f922f244b46082ce46f6fb72bda832b0.scope: Deactivated successfully.
Dec 13 02:14:47 np0005558317 podman[90706]: 2025-12-13 07:14:47.018935836 +0000 UTC m=+0.015389596 container died ece869f1b61462d684212cd9ae598e74f922f244b46082ce46f6fb72bda832b0 (image=quay.io/ceph/ceph:v20, name=frosty_boyd, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 02:14:47 np0005558317 systemd[1]: var-lib-containers-storage-overlay-70c0ef990b6dbff8a861238c9762a046eb4f42a6c963d19b8ea3e5f53798184a-merged.mount: Deactivated successfully.
Dec 13 02:14:47 np0005558317 podman[90706]: 2025-12-13 07:14:47.041107428 +0000 UTC m=+0.037561178 container remove ece869f1b61462d684212cd9ae598e74f922f244b46082ce46f6fb72bda832b0 (image=quay.io/ceph/ceph:v20, name=frosty_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:47 np0005558317 systemd[1]: libpod-conmon-ece869f1b61462d684212cd9ae598e74f922f244b46082ce46f6fb72bda832b0.scope: Deactivated successfully.
Dec 13 02:14:47 np0005558317 python3[90786]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:14:47 np0005558317 podman[90803]: 2025-12-13 07:14:47.324136845 +0000 UTC m=+0.029271578 container create 48e496cd13c1932f6fa7c80eb02e94e322d77c73315376fb66831d33fb076899 (image=quay.io/ceph/ceph:v20, name=reverent_murdock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True)
Dec 13 02:14:47 np0005558317 systemd[1]: Started libpod-conmon-48e496cd13c1932f6fa7c80eb02e94e322d77c73315376fb66831d33fb076899.scope.
Dec 13 02:14:47 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:47 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fa00fdd55525d739f4830194831740be7293c830650cb0b38a15f450ea3bd62/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:47 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fa00fdd55525d739f4830194831740be7293c830650cb0b38a15f450ea3bd62/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:47 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fa00fdd55525d739f4830194831740be7293c830650cb0b38a15f450ea3bd62/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:47 np0005558317 podman[90803]: 2025-12-13 07:14:47.378466398 +0000 UTC m=+0.083601142 container init 48e496cd13c1932f6fa7c80eb02e94e322d77c73315376fb66831d33fb076899 (image=quay.io/ceph/ceph:v20, name=reverent_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 02:14:47 np0005558317 podman[90803]: 2025-12-13 07:14:47.382720364 +0000 UTC m=+0.087855087 container start 48e496cd13c1932f6fa7c80eb02e94e322d77c73315376fb66831d33fb076899 (image=quay.io/ceph/ceph:v20, name=reverent_murdock, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:47 np0005558317 podman[90803]: 2025-12-13 07:14:47.383858452 +0000 UTC m=+0.088993175 container attach 48e496cd13c1932f6fa7c80eb02e94e322d77c73315376fb66831d33fb076899 (image=quay.io/ceph/ceph:v20, name=reverent_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:47 np0005558317 podman[90803]: 2025-12-13 07:14:47.311908341 +0000 UTC m=+0.017043085 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:14:47 np0005558317 podman[90837]: 2025-12-13 07:14:47.41936622 +0000 UTC m=+0.035743482 container exec 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:14:47 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:14:47 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/2924988186' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 13 02:14:47 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/2924988186' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 13 02:14:47 np0005558317 podman[90837]: 2025-12-13 07:14:47.499315395 +0000 UTC m=+0.115692658 container exec_died 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 30 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=30 pruub=15.852033615s) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active pruub 47.802551270s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 30 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=30 pruub=15.852033615s) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown pruub 47.802551270s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:14:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Dec 13 02:14:47 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2200430480' entity='client.admin' 
Dec 13 02:14:47 np0005558317 reverent_murdock[90831]: set ssl_option
Dec 13 02:14:47 np0005558317 systemd[1]: libpod-48e496cd13c1932f6fa7c80eb02e94e322d77c73315376fb66831d33fb076899.scope: Deactivated successfully.
Dec 13 02:14:47 np0005558317 podman[90803]: 2025-12-13 07:14:47.818447439 +0000 UTC m=+0.523582161 container died 48e496cd13c1932f6fa7c80eb02e94e322d77c73315376fb66831d33fb076899 (image=quay.io/ceph/ceph:v20, name=reverent_murdock, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 02:14:47 np0005558317 systemd[1]: var-lib-containers-storage-overlay-7fa00fdd55525d739f4830194831740be7293c830650cb0b38a15f450ea3bd62-merged.mount: Deactivated successfully.
Dec 13 02:14:47 np0005558317 podman[90803]: 2025-12-13 07:14:47.842770384 +0000 UTC m=+0.547905107 container remove 48e496cd13c1932f6fa7c80eb02e94e322d77c73315376fb66831d33fb076899 (image=quay.io/ceph/ceph:v20, name=reverent_murdock, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 02:14:47 np0005558317 systemd[1]: libpod-conmon-48e496cd13c1932f6fa7c80eb02e94e322d77c73315376fb66831d33fb076899.scope: Deactivated successfully.
Dec 13 02:14:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:14:47 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:14:47 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:14:47 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:14:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:14:47 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:14:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:14:47 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:14:47 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:14:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:14:47 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:14:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:14:47 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Dec 13 02:14:48 np0005558317 python3[91042]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:14:48 np0005558317 podman[91067]: 2025-12-13 07:14:48.125484388 +0000 UTC m=+0.028093502 container create 1043b00bf01b041ecfb320e84e792c128d857b61dbd3a2377cb53152f7ca2b96 (image=quay.io/ceph/ceph:v20, name=nifty_dubinsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Dec 13 02:14:48 np0005558317 systemd[1]: Started libpod-conmon-1043b00bf01b041ecfb320e84e792c128d857b61dbd3a2377cb53152f7ca2b96.scope.
Dec 13 02:14:48 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:48 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7605ecef08d9aee831369bdda9ffcf1dfbad48530a90cd5fdce5b9c19f64bb91/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:48 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7605ecef08d9aee831369bdda9ffcf1dfbad48530a90cd5fdce5b9c19f64bb91/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:48 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7605ecef08d9aee831369bdda9ffcf1dfbad48530a90cd5fdce5b9c19f64bb91/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:48 np0005558317 podman[91067]: 2025-12-13 07:14:48.172488082 +0000 UTC m=+0.075097216 container init 1043b00bf01b041ecfb320e84e792c128d857b61dbd3a2377cb53152f7ca2b96 (image=quay.io/ceph/ceph:v20, name=nifty_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:48 np0005558317 podman[91067]: 2025-12-13 07:14:48.176942765 +0000 UTC m=+0.079551879 container start 1043b00bf01b041ecfb320e84e792c128d857b61dbd3a2377cb53152f7ca2b96 (image=quay.io/ceph/ceph:v20, name=nifty_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:48 np0005558317 podman[91067]: 2025-12-13 07:14:48.178763116 +0000 UTC m=+0.081372230 container attach 1043b00bf01b041ecfb320e84e792c128d857b61dbd3a2377cb53152f7ca2b96 (image=quay.io/ceph/ceph:v20, name=nifty_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:48 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v56: 193 pgs: 131 active+clean, 62 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:14:48 np0005558317 podman[91067]: 2025-12-13 07:14:48.114565847 +0000 UTC m=+0.017174981 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:14:48 np0005558317 podman[91094]: 2025-12-13 07:14:48.231039431 +0000 UTC m=+0.027090788 container create 436ae0f5ebebca3aef22d8a5b068d1b1fdb6c54fae6267e67b8f229961493133 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_faraday, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 02:14:48 np0005558317 systemd[1]: Started libpod-conmon-436ae0f5ebebca3aef22d8a5b068d1b1fdb6c54fae6267e67b8f229961493133.scope.
Dec 13 02:14:48 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:48 np0005558317 podman[91094]: 2025-12-13 07:14:48.280390697 +0000 UTC m=+0.076442075 container init 436ae0f5ebebca3aef22d8a5b068d1b1fdb6c54fae6267e67b8f229961493133 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 02:14:48 np0005558317 podman[91094]: 2025-12-13 07:14:48.285643439 +0000 UTC m=+0.081694798 container start 436ae0f5ebebca3aef22d8a5b068d1b1fdb6c54fae6267e67b8f229961493133 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:48 np0005558317 podman[91094]: 2025-12-13 07:14:48.286824248 +0000 UTC m=+0.082875607 container attach 436ae0f5ebebca3aef22d8a5b068d1b1fdb6c54fae6267e67b8f229961493133 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_faraday, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 02:14:48 np0005558317 silly_faraday[91108]: 167 167
Dec 13 02:14:48 np0005558317 systemd[1]: libpod-436ae0f5ebebca3aef22d8a5b068d1b1fdb6c54fae6267e67b8f229961493133.scope: Deactivated successfully.
Dec 13 02:14:48 np0005558317 podman[91094]: 2025-12-13 07:14:48.289173083 +0000 UTC m=+0.085224441 container died 436ae0f5ebebca3aef22d8a5b068d1b1fdb6c54fae6267e67b8f229961493133 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 02:14:48 np0005558317 systemd[1]: var-lib-containers-storage-overlay-43b94e35ede8accac8edf75c9632a89dff68031c0c0f606fc33afb9b60cb71c5-merged.mount: Deactivated successfully.
Dec 13 02:14:48 np0005558317 podman[91094]: 2025-12-13 07:14:48.307412896 +0000 UTC m=+0.103464253 container remove 436ae0f5ebebca3aef22d8a5b068d1b1fdb6c54fae6267e67b8f229961493133 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:48 np0005558317 podman[91094]: 2025-12-13 07:14:48.220620828 +0000 UTC m=+0.016672206 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:48 np0005558317 systemd[1]: libpod-conmon-436ae0f5ebebca3aef22d8a5b068d1b1fdb6c54fae6267e67b8f229961493133.scope: Deactivated successfully.
Dec 13 02:14:48 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Dec 13 02:14:48 np0005558317 podman[91147]: 2025-12-13 07:14:48.422155375 +0000 UTC m=+0.027966274 container create 92fcfb75aed971f5b1d18a9713090730c23f1365c6e88f29c41ecfb43f996292 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 02:14:48 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Dec 13 02:14:48 np0005558317 systemd[1]: Started libpod-conmon-92fcfb75aed971f5b1d18a9713090730c23f1365c6e88f29c41ecfb43f996292.scope.
Dec 13 02:14:48 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:48 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca1625d6ee2c6ee98f4e3fdc88b4fc3b61b955aab46c6e6888c98065b972363/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:48 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca1625d6ee2c6ee98f4e3fdc88b4fc3b61b955aab46c6e6888c98065b972363/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:48 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca1625d6ee2c6ee98f4e3fdc88b4fc3b61b955aab46c6e6888c98065b972363/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:48 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca1625d6ee2c6ee98f4e3fdc88b4fc3b61b955aab46c6e6888c98065b972363/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:48 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca1625d6ee2c6ee98f4e3fdc88b4fc3b61b955aab46c6e6888c98065b972363/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:48 np0005558317 podman[91147]: 2025-12-13 07:14:48.475662172 +0000 UTC m=+0.081473081 container init 92fcfb75aed971f5b1d18a9713090730c23f1365c6e88f29c41ecfb43f996292 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_tharp, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:48 np0005558317 podman[91147]: 2025-12-13 07:14:48.481193628 +0000 UTC m=+0.087004527 container start 92fcfb75aed971f5b1d18a9713090730c23f1365c6e88f29c41ecfb43f996292 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_tharp, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:48 np0005558317 podman[91147]: 2025-12-13 07:14:48.482628065 +0000 UTC m=+0.088438964 container attach 92fcfb75aed971f5b1d18a9713090730c23f1365c6e88f29c41ecfb43f996292 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_tharp, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 02:14:48 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Dec 13 02:14:48 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Dec 13 02:14:48 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.1e( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.1d( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.1c( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.13( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.12( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.17( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.10( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.16( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.15( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.14( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.b( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.a( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.9( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.8( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.f( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.6( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.4( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.5( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.7( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.1( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.2( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.3( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.11( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.c( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.d( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.e( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.1f( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.18( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.1a( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.1b( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.19( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.1e( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.1c( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.13( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.12( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.1d( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.17( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.10( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.16( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.15( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.14( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.b( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.a( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.9( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.8( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.f( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.6( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.4( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.0( empty local-lis/les=30/31 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.5( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.7( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.1( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.3( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.11( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.c( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.d( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.e( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.1f( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.18( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.2( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.1a( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.1b( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.19( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:48 np0005558317 podman[91147]: 2025-12-13 07:14:48.409986293 +0000 UTC m=+0.015797212 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:48 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:14:48 np0005558317 ceph-mgr[75200]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Dec 13 02:14:48 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Dec 13 02:14:48 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 13 02:14:48 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:48 np0005558317 nifty_dubinsky[91079]: Scheduled rgw.rgw update...
Dec 13 02:14:48 np0005558317 systemd[1]: libpod-1043b00bf01b041ecfb320e84e792c128d857b61dbd3a2377cb53152f7ca2b96.scope: Deactivated successfully.
Dec 13 02:14:48 np0005558317 conmon[91079]: conmon 1043b00bf01b041ecfb3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1043b00bf01b041ecfb320e84e792c128d857b61dbd3a2377cb53152f7ca2b96.scope/container/memory.events
Dec 13 02:14:48 np0005558317 podman[91067]: 2025-12-13 07:14:48.545295629 +0000 UTC m=+0.447904743 container died 1043b00bf01b041ecfb320e84e792c128d857b61dbd3a2377cb53152f7ca2b96 (image=quay.io/ceph/ceph:v20, name=nifty_dubinsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:48 np0005558317 podman[91067]: 2025-12-13 07:14:48.564091475 +0000 UTC m=+0.466700590 container remove 1043b00bf01b041ecfb320e84e792c128d857b61dbd3a2377cb53152f7ca2b96 (image=quay.io/ceph/ceph:v20, name=nifty_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 02:14:48 np0005558317 systemd[1]: libpod-conmon-1043b00bf01b041ecfb320e84e792c128d857b61dbd3a2377cb53152f7ca2b96.scope: Deactivated successfully.
Dec 13 02:14:48 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/2200430480' entity='client.admin' 
Dec 13 02:14:48 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:48 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:48 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:14:48 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:48 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:14:48 np0005558317 ceph-mon[74928]: Saving service rgw.rgw spec with placement compute-0
Dec 13 02:14:48 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:48 np0005558317 systemd[1]: var-lib-containers-storage-overlay-7605ecef08d9aee831369bdda9ffcf1dfbad48530a90cd5fdce5b9c19f64bb91-merged.mount: Deactivated successfully.
Dec 13 02:14:48 np0005558317 kind_tharp[91160]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:14:48 np0005558317 kind_tharp[91160]: --> All data devices are unavailable
Dec 13 02:14:48 np0005558317 systemd[1]: libpod-92fcfb75aed971f5b1d18a9713090730c23f1365c6e88f29c41ecfb43f996292.scope: Deactivated successfully.
Dec 13 02:14:48 np0005558317 podman[91147]: 2025-12-13 07:14:48.853506165 +0000 UTC m=+0.459317074 container died 92fcfb75aed971f5b1d18a9713090730c23f1365c6e88f29c41ecfb43f996292 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_tharp, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 02:14:48 np0005558317 systemd[1]: var-lib-containers-storage-overlay-2ca1625d6ee2c6ee98f4e3fdc88b4fc3b61b955aab46c6e6888c98065b972363-merged.mount: Deactivated successfully.
Dec 13 02:14:48 np0005558317 podman[91147]: 2025-12-13 07:14:48.874019359 +0000 UTC m=+0.479830258 container remove 92fcfb75aed971f5b1d18a9713090730c23f1365c6e88f29c41ecfb43f996292 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_tharp, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 02:14:48 np0005558317 systemd[1]: libpod-conmon-92fcfb75aed971f5b1d18a9713090730c23f1365c6e88f29c41ecfb43f996292.scope: Deactivated successfully.
Dec 13 02:14:49 np0005558317 ceph-mgr[75200]: [progress INFO root] Writing back 9 completed events
Dec 13 02:14:49 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 13 02:14:49 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:49 np0005558317 podman[91318]: 2025-12-13 07:14:49.195713678 +0000 UTC m=+0.026068337 container create 184f20331906ca1783627b4319e7e8e4ec9fe5624dbde72ac98541589748f680 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 02:14:49 np0005558317 systemd[1]: Started libpod-conmon-184f20331906ca1783627b4319e7e8e4ec9fe5624dbde72ac98541589748f680.scope.
Dec 13 02:14:49 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:49 np0005558317 podman[91318]: 2025-12-13 07:14:49.248012975 +0000 UTC m=+0.078367654 container init 184f20331906ca1783627b4319e7e8e4ec9fe5624dbde72ac98541589748f680 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_galileo, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 02:14:49 np0005558317 podman[91318]: 2025-12-13 07:14:49.253019475 +0000 UTC m=+0.083374134 container start 184f20331906ca1783627b4319e7e8e4ec9fe5624dbde72ac98541589748f680 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 02:14:49 np0005558317 podman[91318]: 2025-12-13 07:14:49.254176129 +0000 UTC m=+0.084530788 container attach 184f20331906ca1783627b4319e7e8e4ec9fe5624dbde72ac98541589748f680 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_galileo, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:49 np0005558317 blissful_galileo[91350]: 167 167
Dec 13 02:14:49 np0005558317 systemd[1]: libpod-184f20331906ca1783627b4319e7e8e4ec9fe5624dbde72ac98541589748f680.scope: Deactivated successfully.
Dec 13 02:14:49 np0005558317 conmon[91350]: conmon 184f20331906ca178362 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-184f20331906ca1783627b4319e7e8e4ec9fe5624dbde72ac98541589748f680.scope/container/memory.events
Dec 13 02:14:49 np0005558317 podman[91318]: 2025-12-13 07:14:49.256759935 +0000 UTC m=+0.087114594 container died 184f20331906ca1783627b4319e7e8e4ec9fe5624dbde72ac98541589748f680 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_galileo, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 02:14:49 np0005558317 systemd[1]: var-lib-containers-storage-overlay-6ab6caba8a9af4fd5e6fd49115bc1bec5761e87e0662bc90cf7ead86ecda3428-merged.mount: Deactivated successfully.
Dec 13 02:14:49 np0005558317 podman[91318]: 2025-12-13 07:14:49.281213385 +0000 UTC m=+0.111568045 container remove 184f20331906ca1783627b4319e7e8e4ec9fe5624dbde72ac98541589748f680 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:49 np0005558317 podman[91318]: 2025-12-13 07:14:49.184976337 +0000 UTC m=+0.015331016 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:49 np0005558317 systemd[1]: libpod-conmon-184f20331906ca1783627b4319e7e8e4ec9fe5624dbde72ac98541589748f680.scope: Deactivated successfully.
Dec 13 02:14:49 np0005558317 python3[91347]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 02:14:49 np0005558317 podman[91395]: 2025-12-13 07:14:49.393938284 +0000 UTC m=+0.027188983 container create a7528cbca6354316f4e5ed3afc99e5cc24320406022c5b2fdc47cd992feb09a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:49 np0005558317 systemd[1]: Started libpod-conmon-a7528cbca6354316f4e5ed3afc99e5cc24320406022c5b2fdc47cd992feb09a2.scope.
Dec 13 02:14:49 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:49 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0dbe1ffe5bff55074ff138bb3a603334c9a6ced2cc89f9310552e1d8f91ccf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:49 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0dbe1ffe5bff55074ff138bb3a603334c9a6ced2cc89f9310552e1d8f91ccf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:49 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0dbe1ffe5bff55074ff138bb3a603334c9a6ced2cc89f9310552e1d8f91ccf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:49 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0dbe1ffe5bff55074ff138bb3a603334c9a6ced2cc89f9310552e1d8f91ccf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:49 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Dec 13 02:14:49 np0005558317 podman[91395]: 2025-12-13 07:14:49.443560269 +0000 UTC m=+0.076810978 container init a7528cbca6354316f4e5ed3afc99e5cc24320406022c5b2fdc47cd992feb09a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:49 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Dec 13 02:14:49 np0005558317 podman[91395]: 2025-12-13 07:14:49.448687777 +0000 UTC m=+0.081938475 container start a7528cbca6354316f4e5ed3afc99e5cc24320406022c5b2fdc47cd992feb09a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_goldstine, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 02:14:49 np0005558317 podman[91395]: 2025-12-13 07:14:49.449845282 +0000 UTC m=+0.083096001 container attach a7528cbca6354316f4e5ed3afc99e5cc24320406022c5b2fdc47cd992feb09a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 02:14:49 np0005558317 podman[91395]: 2025-12-13 07:14:49.383244325 +0000 UTC m=+0.016495044 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:49 np0005558317 python3[91458]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765610089.0970848-37036-173679259064353/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]: {
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:    "0": [
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:        {
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "devices": [
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "/dev/loop3"
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            ],
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "lv_name": "ceph_lv0",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "lv_size": "21470642176",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "name": "ceph_lv0",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "tags": {
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.cluster_name": "ceph",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.crush_device_class": "",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.encrypted": "0",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.objectstore": "bluestore",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.osd_id": "0",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.type": "block",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.vdo": "0",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.with_tpm": "0"
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            },
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "type": "block",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "vg_name": "ceph_vg0"
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:        }
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:    ],
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:    "1": [
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:        {
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "devices": [
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "/dev/loop4"
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            ],
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "lv_name": "ceph_lv1",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "lv_size": "21470642176",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "name": "ceph_lv1",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "tags": {
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.cluster_name": "ceph",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.crush_device_class": "",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.encrypted": "0",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.objectstore": "bluestore",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.osd_id": "1",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.type": "block",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.vdo": "0",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.with_tpm": "0"
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            },
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "type": "block",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "vg_name": "ceph_vg1"
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:        }
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:    ],
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:    "2": [
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:        {
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "devices": [
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "/dev/loop5"
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            ],
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "lv_name": "ceph_lv2",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "lv_size": "21470642176",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "name": "ceph_lv2",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "tags": {
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.cluster_name": "ceph",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.crush_device_class": "",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.encrypted": "0",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.objectstore": "bluestore",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.osd_id": "2",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.type": "block",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.vdo": "0",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:                "ceph.with_tpm": "0"
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            },
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "type": "block",
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:            "vg_name": "ceph_vg2"
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:        }
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]:    ]
Dec 13 02:14:49 np0005558317 modest_goldstine[91438]: }
Dec 13 02:14:49 np0005558317 systemd[1]: libpod-a7528cbca6354316f4e5ed3afc99e5cc24320406022c5b2fdc47cd992feb09a2.scope: Deactivated successfully.
Dec 13 02:14:49 np0005558317 conmon[91438]: conmon a7528cbca6354316f4e5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a7528cbca6354316f4e5ed3afc99e5cc24320406022c5b2fdc47cd992feb09a2.scope/container/memory.events
Dec 13 02:14:49 np0005558317 podman[91395]: 2025-12-13 07:14:49.684587152 +0000 UTC m=+0.317837851 container died a7528cbca6354316f4e5ed3afc99e5cc24320406022c5b2fdc47cd992feb09a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:49 np0005558317 podman[91395]: 2025-12-13 07:14:49.706445385 +0000 UTC m=+0.339696084 container remove a7528cbca6354316f4e5ed3afc99e5cc24320406022c5b2fdc47cd992feb09a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 02:14:49 np0005558317 systemd[1]: libpod-conmon-a7528cbca6354316f4e5ed3afc99e5cc24320406022c5b2fdc47cd992feb09a2.scope: Deactivated successfully.
Dec 13 02:14:49 np0005558317 systemd[1]: var-lib-containers-storage-overlay-da0dbe1ffe5bff55074ff138bb3a603334c9a6ced2cc89f9310552e1d8f91ccf-merged.mount: Deactivated successfully.
Dec 13 02:14:49 np0005558317 python3[91574]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:14:49 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Dec 13 02:14:49 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Dec 13 02:14:49 np0005558317 podman[91575]: 2025-12-13 07:14:49.975600776 +0000 UTC m=+0.029502571 container create 7ea71ea4cd112f3bfdff2517b04d88dba8d9b5daa0e449b771816d69277bb564 (image=quay.io/ceph/ceph:v20, name=stoic_kalam, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 02:14:50 np0005558317 systemd[1]: Started libpod-conmon-7ea71ea4cd112f3bfdff2517b04d88dba8d9b5daa0e449b771816d69277bb564.scope.
Dec 13 02:14:50 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:50 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d748c88a04e6ca1e5b943c9204d535237a8b0613bd1364e789b052b7b828d0d0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:50 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d748c88a04e6ca1e5b943c9204d535237a8b0613bd1364e789b052b7b828d0d0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:50 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d748c88a04e6ca1e5b943c9204d535237a8b0613bd1364e789b052b7b828d0d0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:50 np0005558317 podman[91575]: 2025-12-13 07:14:50.035534883 +0000 UTC m=+0.089436698 container init 7ea71ea4cd112f3bfdff2517b04d88dba8d9b5daa0e449b771816d69277bb564 (image=quay.io/ceph/ceph:v20, name=stoic_kalam, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 02:14:50 np0005558317 podman[91575]: 2025-12-13 07:14:50.040838962 +0000 UTC m=+0.094740757 container start 7ea71ea4cd112f3bfdff2517b04d88dba8d9b5daa0e449b771816d69277bb564 (image=quay.io/ceph/ceph:v20, name=stoic_kalam, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:50 np0005558317 podman[91575]: 2025-12-13 07:14:50.042168701 +0000 UTC m=+0.096070496 container attach 7ea71ea4cd112f3bfdff2517b04d88dba8d9b5daa0e449b771816d69277bb564 (image=quay.io/ceph/ceph:v20, name=stoic_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:50 np0005558317 podman[91599]: 2025-12-13 07:14:50.043944028 +0000 UTC m=+0.030224788 container create 12e5017ca8327a586caa02e46db69069477ba24dba9de2e95b9d84d727c787e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True)
Dec 13 02:14:50 np0005558317 podman[91575]: 2025-12-13 07:14:49.964137841 +0000 UTC m=+0.018039646 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:14:50 np0005558317 systemd[1]: Started libpod-conmon-12e5017ca8327a586caa02e46db69069477ba24dba9de2e95b9d84d727c787e1.scope.
Dec 13 02:14:50 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:50 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:50 np0005558317 podman[91599]: 2025-12-13 07:14:50.095214842 +0000 UTC m=+0.081495592 container init 12e5017ca8327a586caa02e46db69069477ba24dba9de2e95b9d84d727c787e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_solomon, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 02:14:50 np0005558317 podman[91599]: 2025-12-13 07:14:50.099913473 +0000 UTC m=+0.086194224 container start 12e5017ca8327a586caa02e46db69069477ba24dba9de2e95b9d84d727c787e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_solomon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 02:14:50 np0005558317 podman[91599]: 2025-12-13 07:14:50.101035272 +0000 UTC m=+0.087316021 container attach 12e5017ca8327a586caa02e46db69069477ba24dba9de2e95b9d84d727c787e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_solomon, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:50 np0005558317 elastic_solomon[91615]: 167 167
Dec 13 02:14:50 np0005558317 systemd[1]: libpod-12e5017ca8327a586caa02e46db69069477ba24dba9de2e95b9d84d727c787e1.scope: Deactivated successfully.
Dec 13 02:14:50 np0005558317 conmon[91615]: conmon 12e5017ca8327a586caa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-12e5017ca8327a586caa02e46db69069477ba24dba9de2e95b9d84d727c787e1.scope/container/memory.events
Dec 13 02:14:50 np0005558317 podman[91599]: 2025-12-13 07:14:50.102861655 +0000 UTC m=+0.089142405 container died 12e5017ca8327a586caa02e46db69069477ba24dba9de2e95b9d84d727c787e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_solomon, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:50 np0005558317 systemd[1]: var-lib-containers-storage-overlay-3c3382e429be7f9b0ce32109956b2b492d7a78905d28b6aa4f76d201dbc3a013-merged.mount: Deactivated successfully.
Dec 13 02:14:50 np0005558317 podman[91599]: 2025-12-13 07:14:50.125665434 +0000 UTC m=+0.111946185 container remove 12e5017ca8327a586caa02e46db69069477ba24dba9de2e95b9d84d727c787e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_solomon, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:50 np0005558317 podman[91599]: 2025-12-13 07:14:50.033888458 +0000 UTC m=+0.020169229 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:50 np0005558317 systemd[1]: libpod-conmon-12e5017ca8327a586caa02e46db69069477ba24dba9de2e95b9d84d727c787e1.scope: Deactivated successfully.
Dec 13 02:14:50 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v58: 193 pgs: 162 active+clean, 31 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:14:50 np0005558317 podman[91655]: 2025-12-13 07:14:50.24309245 +0000 UTC m=+0.029021506 container create b69e737569610004c428347db011ac4714335146a9756b8a895d926757cdb87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_raman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:50 np0005558317 systemd[1]: Started libpod-conmon-b69e737569610004c428347db011ac4714335146a9756b8a895d926757cdb87a.scope.
Dec 13 02:14:50 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:50 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69c5d4e7a4357fc103490dca075f8df7d7202fb26e9daf84f876e25b78a91b6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:50 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69c5d4e7a4357fc103490dca075f8df7d7202fb26e9daf84f876e25b78a91b6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:50 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69c5d4e7a4357fc103490dca075f8df7d7202fb26e9daf84f876e25b78a91b6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:50 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69c5d4e7a4357fc103490dca075f8df7d7202fb26e9daf84f876e25b78a91b6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:50 np0005558317 podman[91655]: 2025-12-13 07:14:50.303037036 +0000 UTC m=+0.088966112 container init b69e737569610004c428347db011ac4714335146a9756b8a895d926757cdb87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_raman, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:50 np0005558317 podman[91655]: 2025-12-13 07:14:50.307542785 +0000 UTC m=+0.093471841 container start b69e737569610004c428347db011ac4714335146a9756b8a895d926757cdb87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_raman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:50 np0005558317 podman[91655]: 2025-12-13 07:14:50.308824515 +0000 UTC m=+0.094753591 container attach b69e737569610004c428347db011ac4714335146a9756b8a895d926757cdb87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 02:14:50 np0005558317 podman[91655]: 2025-12-13 07:14:50.230937314 +0000 UTC m=+0.016866390 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:50 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14236 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:14:50 np0005558317 ceph-mgr[75200]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec 13 02:14:50 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Dec 13 02:14:50 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Dec 13 02:14:50 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Dec 13 02:14:50 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Dec 13 02:14:50 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Dec 13 02:14:50 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Dec 13 02:14:50 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Dec 13 02:14:50 np0005558317 ceph-mon[74928]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 13 02:14:50 np0005558317 ceph-mon[74928]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 13 02:14:50 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0[74924]: 2025-12-13T07:14:50.409+0000 7fa7b36e9640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 13 02:14:50 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 13 02:14:50 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).mds e2 new map
Dec 13 02:14:50 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).mds e2 print_map#012e2#012btime 2025-12-13T07:14:50:410071+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-13T07:14:50.409831+0000#012modified#0112025-12-13T07:14:50.409831+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 
Dec 13 02:14:50 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Dec 13 02:14:50 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Dec 13 02:14:50 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Dec 13 02:14:50 np0005558317 ceph-mgr[75200]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec 13 02:14:50 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec 13 02:14:50 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 13 02:14:50 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:50 np0005558317 ceph-mgr[75200]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec 13 02:14:50 np0005558317 systemd[1]: libpod-7ea71ea4cd112f3bfdff2517b04d88dba8d9b5daa0e449b771816d69277bb564.scope: Deactivated successfully.
Dec 13 02:14:50 np0005558317 conmon[91597]: conmon 7ea71ea4cd112f3bfdff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7ea71ea4cd112f3bfdff2517b04d88dba8d9b5daa0e449b771816d69277bb564.scope/container/memory.events
Dec 13 02:14:50 np0005558317 podman[91575]: 2025-12-13 07:14:50.440945327 +0000 UTC m=+0.494847132 container died 7ea71ea4cd112f3bfdff2517b04d88dba8d9b5daa0e449b771816d69277bb564 (image=quay.io/ceph/ceph:v20, name=stoic_kalam, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:50 np0005558317 podman[91575]: 2025-12-13 07:14:50.459563791 +0000 UTC m=+0.513465586 container remove 7ea71ea4cd112f3bfdff2517b04d88dba8d9b5daa0e449b771816d69277bb564 (image=quay.io/ceph/ceph:v20, name=stoic_kalam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:50 np0005558317 systemd[1]: libpod-conmon-7ea71ea4cd112f3bfdff2517b04d88dba8d9b5daa0e449b771816d69277bb564.scope: Deactivated successfully.
Dec 13 02:14:50 np0005558317 python3[91731]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:14:50 np0005558317 podman[91765]: 2025-12-13 07:14:50.763348138 +0000 UTC m=+0.036958003 container create 39827d121e52975f63fbcbb9c99fee8bcaead78cc6af8f45014b8b77e88865a1 (image=quay.io/ceph/ceph:v20, name=nostalgic_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 02:14:50 np0005558317 systemd[1]: Started libpod-conmon-39827d121e52975f63fbcbb9c99fee8bcaead78cc6af8f45014b8b77e88865a1.scope.
Dec 13 02:14:50 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:50 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc2677f5f39c827fcce2f85f380458faabd7218088062441ff81f1bb50525e4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:50 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc2677f5f39c827fcce2f85f380458faabd7218088062441ff81f1bb50525e4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:50 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc2677f5f39c827fcce2f85f380458faabd7218088062441ff81f1bb50525e4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:50 np0005558317 podman[91765]: 2025-12-13 07:14:50.830660762 +0000 UTC m=+0.104270628 container init 39827d121e52975f63fbcbb9c99fee8bcaead78cc6af8f45014b8b77e88865a1 (image=quay.io/ceph/ceph:v20, name=nostalgic_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 02:14:50 np0005558317 systemd[1]: var-lib-containers-storage-overlay-d748c88a04e6ca1e5b943c9204d535237a8b0613bd1364e789b052b7b828d0d0-merged.mount: Deactivated successfully.
Dec 13 02:14:50 np0005558317 podman[91765]: 2025-12-13 07:14:50.838803758 +0000 UTC m=+0.112413623 container start 39827d121e52975f63fbcbb9c99fee8bcaead78cc6af8f45014b8b77e88865a1 (image=quay.io/ceph/ceph:v20, name=nostalgic_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:50 np0005558317 podman[91765]: 2025-12-13 07:14:50.840359221 +0000 UTC m=+0.113969106 container attach 39827d121e52975f63fbcbb9c99fee8bcaead78cc6af8f45014b8b77e88865a1 (image=quay.io/ceph/ceph:v20, name=nostalgic_diffie, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 02:14:50 np0005558317 podman[91765]: 2025-12-13 07:14:50.750666972 +0000 UTC m=+0.024276859 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:14:50 np0005558317 lvm[91796]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:14:50 np0005558317 lvm[91796]: VG ceph_vg0 finished
Dec 13 02:14:50 np0005558317 lvm[91799]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:14:50 np0005558317 lvm[91799]: VG ceph_vg1 finished
Dec 13 02:14:50 np0005558317 lvm[91802]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:14:50 np0005558317 lvm[91802]: VG ceph_vg2 finished
Dec 13 02:14:50 np0005558317 pensive_raman[91668]: {}
Dec 13 02:14:50 np0005558317 lvm[91805]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:14:50 np0005558317 lvm[91805]: VG ceph_vg0 finished
Dec 13 02:14:50 np0005558317 systemd[1]: libpod-b69e737569610004c428347db011ac4714335146a9756b8a895d926757cdb87a.scope: Deactivated successfully.
Dec 13 02:14:50 np0005558317 podman[91825]: 2025-12-13 07:14:50.966722692 +0000 UTC m=+0.017414682 container died b69e737569610004c428347db011ac4714335146a9756b8a895d926757cdb87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_raman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:50 np0005558317 systemd[1]: var-lib-containers-storage-overlay-69c5d4e7a4357fc103490dca075f8df7d7202fb26e9daf84f876e25b78a91b6a-merged.mount: Deactivated successfully.
Dec 13 02:14:50 np0005558317 podman[91825]: 2025-12-13 07:14:50.991277353 +0000 UTC m=+0.041969322 container remove b69e737569610004c428347db011ac4714335146a9756b8a895d926757cdb87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 02:14:50 np0005558317 systemd[1]: libpod-conmon-b69e737569610004c428347db011ac4714335146a9756b8a895d926757cdb87a.scope: Deactivated successfully.
Dec 13 02:14:51 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:14:51 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:51 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:14:51 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:51 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Dec 13 02:14:51 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Dec 13 02:14:51 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Dec 13 02:14:51 np0005558317 ceph-mon[74928]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 13 02:14:51 np0005558317 ceph-mon[74928]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 13 02:14:51 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 13 02:14:51 np0005558317 ceph-mon[74928]: Saving service mds.cephfs spec with placement compute-0
Dec 13 02:14:51 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:51 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:51 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:51 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:14:51 np0005558317 ceph-mgr[75200]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec 13 02:14:51 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec 13 02:14:51 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 13 02:14:51 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:51 np0005558317 nostalgic_diffie[91787]: Scheduled mds.cephfs update...
Dec 13 02:14:51 np0005558317 systemd[1]: libpod-39827d121e52975f63fbcbb9c99fee8bcaead78cc6af8f45014b8b77e88865a1.scope: Deactivated successfully.
Dec 13 02:14:51 np0005558317 conmon[91787]: conmon 39827d121e52975f63fb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-39827d121e52975f63fbcbb9c99fee8bcaead78cc6af8f45014b8b77e88865a1.scope/container/memory.events
Dec 13 02:14:51 np0005558317 podman[91765]: 2025-12-13 07:14:51.204138609 +0000 UTC m=+0.477748475 container died 39827d121e52975f63fbcbb9c99fee8bcaead78cc6af8f45014b8b77e88865a1 (image=quay.io/ceph/ceph:v20, name=nostalgic_diffie, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 02:14:51 np0005558317 systemd[1]: var-lib-containers-storage-overlay-4fc2677f5f39c827fcce2f85f380458faabd7218088062441ff81f1bb50525e4-merged.mount: Deactivated successfully.
Dec 13 02:14:51 np0005558317 podman[91765]: 2025-12-13 07:14:51.225411181 +0000 UTC m=+0.499021048 container remove 39827d121e52975f63fbcbb9c99fee8bcaead78cc6af8f45014b8b77e88865a1 (image=quay.io/ceph/ceph:v20, name=nostalgic_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 02:14:51 np0005558317 systemd[1]: libpod-conmon-39827d121e52975f63fbcbb9c99fee8bcaead78cc6af8f45014b8b77e88865a1.scope: Deactivated successfully.
Dec 13 02:14:51 np0005558317 podman[91957]: 2025-12-13 07:14:51.490608964 +0000 UTC m=+0.038604690 container exec 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:51 np0005558317 podman[92025]: 2025-12-13 07:14:51.625632151 +0000 UTC m=+0.048108230 container exec_died 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 02:14:51 np0005558317 podman[91957]: 2025-12-13 07:14:51.629411115 +0000 UTC m=+0.177406840 container exec_died 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:51 np0005558317 python3[92059]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 02:14:52 np0005558317 python3[92203]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765610091.5224814-37066-154860606618398/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=89bb88aee4825eacb5f29faabebd795dc909bcd4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: Saving service mds.cephfs spec with placement compute-0
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:52 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v60: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:14:52 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Dec 13 02:14:52 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Dec 13 02:14:52 np0005558317 python3[92333]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:14:52 np0005558317 podman[92347]: 2025-12-13 07:14:52.431501494 +0000 UTC m=+0.029999895 container create 432383f4ffddb21de4db6d50bb1d33c5215904a4f7aaa232d6c385f1ce96d097 (image=quay.io/ceph/ceph:v20, name=cranky_wescoff, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 02:14:52 np0005558317 systemd[1]: Started libpod-conmon-432383f4ffddb21de4db6d50bb1d33c5215904a4f7aaa232d6c385f1ce96d097.scope.
Dec 13 02:14:52 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:52 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6251b333b72a3e7080263b4581892cb66666b9bd4d648f45326703b2e0fa703/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:52 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6251b333b72a3e7080263b4581892cb66666b9bd4d648f45326703b2e0fa703/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:52 np0005558317 podman[92347]: 2025-12-13 07:14:52.484499314 +0000 UTC m=+0.082997736 container init 432383f4ffddb21de4db6d50bb1d33c5215904a4f7aaa232d6c385f1ce96d097 (image=quay.io/ceph/ceph:v20, name=cranky_wescoff, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 02:14:52 np0005558317 podman[92347]: 2025-12-13 07:14:52.495491494 +0000 UTC m=+0.093989906 container start 432383f4ffddb21de4db6d50bb1d33c5215904a4f7aaa232d6c385f1ce96d097 (image=quay.io/ceph/ceph:v20, name=cranky_wescoff, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:52 np0005558317 podman[92347]: 2025-12-13 07:14:52.50088447 +0000 UTC m=+0.099382873 container attach 432383f4ffddb21de4db6d50bb1d33c5215904a4f7aaa232d6c385f1ce96d097 (image=quay.io/ceph/ceph:v20, name=cranky_wescoff, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 02:14:52 np0005558317 podman[92347]: 2025-12-13 07:14:52.419810721 +0000 UTC m=+0.018309143 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:14:52 np0005558317 podman[92457]: 2025-12-13 07:14:52.848247924 +0000 UTC m=+0.027689043 container create 8e43e84ceaedfe5e05e5123c5dca19efdfadb1c0b09a65e0d94f5f233d6251d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 02:14:52 np0005558317 systemd[1]: Started libpod-conmon-8e43e84ceaedfe5e05e5123c5dca19efdfadb1c0b09a65e0d94f5f233d6251d0.scope.
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4247453847' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Dec 13 02:14:52 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4247453847' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec 13 02:14:52 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:52 np0005558317 podman[92457]: 2025-12-13 07:14:52.896562853 +0000 UTC m=+0.076003982 container init 8e43e84ceaedfe5e05e5123c5dca19efdfadb1c0b09a65e0d94f5f233d6251d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:52 np0005558317 podman[92457]: 2025-12-13 07:14:52.900611723 +0000 UTC m=+0.080052852 container start 8e43e84ceaedfe5e05e5123c5dca19efdfadb1c0b09a65e0d94f5f233d6251d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 02:14:52 np0005558317 podman[92457]: 2025-12-13 07:14:52.902487808 +0000 UTC m=+0.081928947 container attach 8e43e84ceaedfe5e05e5123c5dca19efdfadb1c0b09a65e0d94f5f233d6251d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 02:14:52 np0005558317 systemd[1]: libpod-432383f4ffddb21de4db6d50bb1d33c5215904a4f7aaa232d6c385f1ce96d097.scope: Deactivated successfully.
Dec 13 02:14:52 np0005558317 fervent_shamir[92470]: 167 167
Dec 13 02:14:52 np0005558317 systemd[1]: libpod-8e43e84ceaedfe5e05e5123c5dca19efdfadb1c0b09a65e0d94f5f233d6251d0.scope: Deactivated successfully.
Dec 13 02:14:52 np0005558317 conmon[92362]: conmon 432383f4ffddb21de4db <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-432383f4ffddb21de4db6d50bb1d33c5215904a4f7aaa232d6c385f1ce96d097.scope/container/memory.events
Dec 13 02:14:52 np0005558317 podman[92347]: 2025-12-13 07:14:52.905685529 +0000 UTC m=+0.504183930 container died 432383f4ffddb21de4db6d50bb1d33c5215904a4f7aaa232d6c385f1ce96d097 (image=quay.io/ceph/ceph:v20, name=cranky_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:52 np0005558317 conmon[92470]: conmon 8e43e84ceaedfe5e05e5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e43e84ceaedfe5e05e5123c5dca19efdfadb1c0b09a65e0d94f5f233d6251d0.scope/container/memory.events
Dec 13 02:14:52 np0005558317 podman[92457]: 2025-12-13 07:14:52.907672844 +0000 UTC m=+0.087113973 container died 8e43e84ceaedfe5e05e5123c5dca19efdfadb1c0b09a65e0d94f5f233d6251d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:52 np0005558317 systemd[1]: var-lib-containers-storage-overlay-97bf723da73f1ffe2703cf97315ade64e934dd5a5564e3081d6e31bc50d72150-merged.mount: Deactivated successfully.
Dec 13 02:14:52 np0005558317 systemd[1]: var-lib-containers-storage-overlay-f6251b333b72a3e7080263b4581892cb66666b9bd4d648f45326703b2e0fa703-merged.mount: Deactivated successfully.
Dec 13 02:14:52 np0005558317 podman[92457]: 2025-12-13 07:14:52.933475531 +0000 UTC m=+0.112916660 container remove 8e43e84ceaedfe5e05e5123c5dca19efdfadb1c0b09a65e0d94f5f233d6251d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 02:14:52 np0005558317 podman[92457]: 2025-12-13 07:14:52.836275582 +0000 UTC m=+0.015716731 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:52 np0005558317 podman[92347]: 2025-12-13 07:14:52.938603458 +0000 UTC m=+0.537101860 container remove 432383f4ffddb21de4db6d50bb1d33c5215904a4f7aaa232d6c385f1ce96d097 (image=quay.io/ceph/ceph:v20, name=cranky_wescoff, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:52 np0005558317 systemd[1]: libpod-conmon-432383f4ffddb21de4db6d50bb1d33c5215904a4f7aaa232d6c385f1ce96d097.scope: Deactivated successfully.
Dec 13 02:14:52 np0005558317 systemd[1]: libpod-conmon-8e43e84ceaedfe5e05e5123c5dca19efdfadb1c0b09a65e0d94f5f233d6251d0.scope: Deactivated successfully.
Dec 13 02:14:53 np0005558317 podman[92502]: 2025-12-13 07:14:53.052908607 +0000 UTC m=+0.028296425 container create e03f1c73fbef2eaa47cb26e19a1c44de0ba91db77530d371697a286558d61096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ptolemy, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:53 np0005558317 systemd[1]: Started libpod-conmon-e03f1c73fbef2eaa47cb26e19a1c44de0ba91db77530d371697a286558d61096.scope.
Dec 13 02:14:53 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:53 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4214272df158ed85bd6c6b696971883a186c28eab5d769d9956596244d114aaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:53 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4214272df158ed85bd6c6b696971883a186c28eab5d769d9956596244d114aaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:53 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4214272df158ed85bd6c6b696971883a186c28eab5d769d9956596244d114aaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:53 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4214272df158ed85bd6c6b696971883a186c28eab5d769d9956596244d114aaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:53 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4214272df158ed85bd6c6b696971883a186c28eab5d769d9956596244d114aaa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:53 np0005558317 podman[92502]: 2025-12-13 07:14:53.120679633 +0000 UTC m=+0.096067441 container init e03f1c73fbef2eaa47cb26e19a1c44de0ba91db77530d371697a286558d61096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Dec 13 02:14:53 np0005558317 podman[92502]: 2025-12-13 07:14:53.126050167 +0000 UTC m=+0.101437985 container start e03f1c73fbef2eaa47cb26e19a1c44de0ba91db77530d371697a286558d61096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:53 np0005558317 podman[92502]: 2025-12-13 07:14:53.127313351 +0000 UTC m=+0.102701159 container attach e03f1c73fbef2eaa47cb26e19a1c44de0ba91db77530d371697a286558d61096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ptolemy, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 02:14:53 np0005558317 podman[92502]: 2025-12-13 07:14:53.041045169 +0000 UTC m=+0.016432997 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:53 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Dec 13 02:14:53 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:14:53 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:14:53 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:14:53 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:14:53 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:14:53 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:14:53 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.1b( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.271790504s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968959808s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.1d( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.288320541s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.985500336s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.1b( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.271747589s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968959808s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.1d( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.288269043s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.985500336s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.1e( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287899971s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.985153198s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.1e( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287873268s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.985153198s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.17( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.271297455s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968765259s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.19( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.271382332s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968856812s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.17( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.271284103s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968765259s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.18( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.271527290s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.969051361s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.19( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.271353722s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968856812s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.18( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.271512985s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.969051361s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.16( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.271117210s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968692780s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.11( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.288145065s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.985740662s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.11( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.288131714s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.985740662s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.15( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.271201134s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968852997s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.16( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.271050453s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968692780s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.15( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.271189690s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968852997s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.12( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.288107872s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.985786438s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.12( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.288093567s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.985786438s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.13( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.288252831s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.985961914s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[5.1e( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[5.19( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[5.18( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[5.1a( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[2.19( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[2.18( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[5.7( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[2.1d( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[5.4( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[2.1c( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[2.f( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[5.5( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[2.2( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.13( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.288240433s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.985961914s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.13( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.270836830s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968608856s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.14( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287993431s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.985763550s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[2.1f( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[5.2( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[5.3( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[2.b( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[2.8( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[2.16( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[5.1d( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[5.c( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[5.f( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[2.9( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[2.6( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[5.1( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[2.7( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[2.4( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[2.5( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[5.15( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[2.3( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[2.a( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[2.d( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[5.9( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[5.16( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[2.15( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[5.12( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[5.13( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[5.14( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[2.17( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[2.13( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[2.11( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.18( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.266854286s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.958225250s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[5.11( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.18( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.266841888s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.958225250s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.15( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284462929s) [2] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.976142883s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.15( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284450531s) [2] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.976142883s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.13( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.266247749s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.958084106s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.13( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.266237259s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.958084106s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.11( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284382820s) [2] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.976264954s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.11( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284361839s) [2] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.976264954s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.12( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.266141891s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.958072662s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.12( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.266132355s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.958072662s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.11( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.266028404s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.958084106s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.13( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284206390s) [2] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.976264954s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.11( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.266015053s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.958084106s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.13( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284194946s) [2] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.976264954s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.10( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.265874863s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.958015442s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.10( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.265864372s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.958015442s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.f( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.266007423s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.958179474s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.f( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.265996933s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.958179474s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.d( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284914017s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.977138519s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.d( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284904480s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.977138519s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.e( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.265877724s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.958160400s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.c( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284804344s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.977115631s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.e( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.265867233s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.958160400s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.c( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284794807s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.977115631s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.d( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.265625954s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.958011627s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.d( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.265614510s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.958011627s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.f( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284768105s) [2] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.977180481s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.f( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284757614s) [2] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.977180481s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.e( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284683228s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.977157593s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.e( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284672737s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.977157593s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.2( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284670830s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.977180481s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.2( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284660339s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.977180481s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.2( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.265351295s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.957958221s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.2( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.265339851s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.957958221s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.1( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.265258789s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.957885742s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.1( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.265247345s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.957885742s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.1( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284502029s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.977214813s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.1( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284492493s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.977214813s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.4( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.265030861s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.957862854s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.4( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.265020370s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.957862854s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.17( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.283348083s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.976219177s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.6( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284305573s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.977222443s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.17( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.283327103s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.976219177s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.6( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284295082s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.977222443s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.9( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.264868736s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.957881927s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.b( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284204483s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.977230072s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.9( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.264857292s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.957881927s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.b( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284193993s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.977230072s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.1a( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.264738083s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.957839966s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.13( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.270820618s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968608856s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[2.1b( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.15( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287983894s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.985794067s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.14( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287969589s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.985763550s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.15( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287973404s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.985794067s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.11( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.270705223s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968540192s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.11( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.270693779s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968540192s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.16( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.288050652s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.985950470s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.16( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.288040161s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.985950470s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.d( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.271038055s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.969043732s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.9( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.288059235s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.986072540s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.d( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.271027565s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.969043732s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.9( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.288045883s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.986072540s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.f( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.270983696s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.969047546s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.f( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.270970345s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.969047546s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.7( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287842751s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.985996246s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.7( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.270167351s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968326569s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.7( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287830353s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.985996246s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.7( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.270154953s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968326569s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.5( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287767410s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.986015320s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.3( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269947052s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968212128s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.5( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287756920s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.986015320s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.3( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269937515s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968212128s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.4( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287714005s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.986038208s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.4( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287701607s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.986038208s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.4( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269869804s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968242645s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.3( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287649155s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.986038208s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.3( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287636757s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.986038208s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.5( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269750595s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968173981s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.4( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269803047s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968242645s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.2( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287533760s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.986038208s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.2( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287524223s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.986038208s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.5( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269651413s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968173981s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.6( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269672394s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968231201s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.6( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269659996s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968231201s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.1( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287481308s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.986087799s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.1( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287470818s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.986087799s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.8( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269401550s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968040466s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.8( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269389153s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968040466s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.f( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287393570s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.986053467s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.f( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287382126s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.986053467s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.9( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269335747s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968025208s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.9( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269327164s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968025208s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.a( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269232750s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.967975616s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.a( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269223213s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.967975616s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.b( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.268044472s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.966827393s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.c( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287295341s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.986083984s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.b( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.268028259s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.966827393s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.c( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287283897s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.986083984s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.1c( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.268096924s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.966949463s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.1c( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.268087387s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.966949463s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.1d( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.267945290s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.966838837s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.1a( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287194252s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.986091614s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.1a( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287183762s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.986091614s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.1d( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.267930984s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.966838837s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.19( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287652969s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.986606598s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.19( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287644386s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.986606598s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.1f( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.267802238s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.966815948s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.1f( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.267787933s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.966815948s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.18( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287344933s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.986442566s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.18( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287312508s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.986442566s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.2( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269187927s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968326569s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.2( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269170761s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968326569s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.1a( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.264728546s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.957839966s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.5( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.264863968s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.957996368s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.5( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.264854431s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.957996368s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.a( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.263738632s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.956951141s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.8( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284156799s) [2] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.977390289s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.a( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.263729095s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.956951141s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.8( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284144402s) [2] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.977390289s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.1b( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.264417648s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.957714081s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.1b( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.264406204s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.957714081s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.4( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.283899307s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.977249146s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.4( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.283886909s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.977249146s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.7( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.263356209s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.956760406s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.7( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.263347626s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.956760406s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.8( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.263430595s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.956874847s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.8( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.263421059s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.956874847s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.1e( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.283769608s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.977279663s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.1e( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.283742905s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.977279663s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.1f( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.283679008s) [2] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.977294922s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.14( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.264464378s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.958095551s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[4.12( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[4.10( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[4.f( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[6.d( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[6.c( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[4.d( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[6.e( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[6.2( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[4.2( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[6.1( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[4.4( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[6.17( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[6.6( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[4.9( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[6.b( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.1f( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.283666611s) [2] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.977294922s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.14( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.264445305s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.958095551s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.1c( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.283563614s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.977302551s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.1d( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.283623695s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.977371216s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[4.5( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.1c( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.283553123s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.977302551s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[6.4( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.1d( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.283612251s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.977371216s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.1c( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.263423920s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.956939697s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.1c( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.263049126s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.956939697s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[4.7( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[4.8( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[6.1e( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[4.14( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[6.1c( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[6.1d( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.1c( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.292141914s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842533112s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.1c( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.292124748s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842533112s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.18( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.257698059s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.808376312s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.18( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.257686615s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.808376312s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.13( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.291768074s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842552185s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.13( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.291752815s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842552185s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[4.18( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[6.15( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [2] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[4.13( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.14( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.278313637s) [2] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.976139069s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.14( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.278292656s) [2] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.976139069s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.16( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.257464409s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.808376312s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.16( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.257449150s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.808376312s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.15( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.257287025s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.808353424s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.15( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.257276535s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.808353424s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.11( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.291619301s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842761993s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.11( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.291609764s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842761993s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.12( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.257050514s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.808311462s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.12( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.257040977s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.808311462s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.11( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.257040977s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.808380127s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.11( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.257032394s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.808380127s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.15( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.291224480s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842632294s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.15( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.291180611s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842632294s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.f( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.256767273s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.808307648s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.f( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.256759644s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.808307648s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.e( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.256684303s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.808303833s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.e( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.256677628s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.808303833s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.a( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.290964127s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842658997s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.a( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.290955544s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842658997s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.9( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.290888786s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842670441s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.9( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.290882111s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842670441s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.c( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.256412506s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.808273315s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.c( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.256403923s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.808273315s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.8( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.290736198s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842674255s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.8( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.290728569s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842674255s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.f( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.290653229s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842685699s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[6.11( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [2] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[6.13( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [2] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[4.11( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[4.e( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[6.f( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [2] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[4.1( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[4.1a( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[4.a( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[6.8( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [2] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[4.1b( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[6.1f( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [2] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[4.1c( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[6.14( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [2] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.f( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.290645599s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842685699s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.6( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.290569305s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842693329s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.6( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.290561676s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842693329s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.4( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.290488243s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842708588s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.4( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.290479660s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842708588s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.1( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.255922318s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.808227539s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.1( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.255914688s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.808227539s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.5( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.290238380s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842723846s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.5( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.290227890s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842723846s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.3( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.255731583s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.808303833s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.3( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.255723000s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.808303833s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.5( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.255573273s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.808235168s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.5( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.255566597s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.808235168s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.1( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289997101s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842739105s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.1( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289990425s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842739105s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.6( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.255416870s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.808231354s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.6( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.255409241s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.808231354s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.2( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289930344s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842807770s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.2( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289922714s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842807770s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.7( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.253518105s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.806476593s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.7( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.253510475s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.806476593s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.3( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289686203s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842742920s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.3( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289677620s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842742920s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.8( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.252987862s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.806118011s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.8( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.252979279s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.806118011s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.c( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289563179s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842761993s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.c( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289554596s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842761993s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.9( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.252856255s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.806114197s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.9( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.252847672s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.806114197s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.a( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.253200531s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.806533813s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.a( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.253192902s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.806533813s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.e( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289352417s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842777252s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.e( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289342880s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842777252s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.1f( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289284706s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842792511s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.1f( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289276123s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842792511s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.1b( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.252532959s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.806114197s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.1b( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.252525330s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.806114197s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.18( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289145470s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842792511s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.1d( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.252443314s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.806095123s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.18( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289132118s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842792511s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.1d( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.252432823s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.806095123s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.1a( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289058685s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842807770s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.1e( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.252333641s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.806098938s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.1e( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.252324104s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.806098938s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.1a( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289048195s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842807770s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.1b( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.288977623s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842823029s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.1b( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.288969040s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842823029s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.1f( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.252244949s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.806110382s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.1f( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.252229691s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.806110382s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.17( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.252530098s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.808372498s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.17( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.252507210s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.808372498s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:14:53 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:14:53 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:14:53 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:14:53 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:14:53 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:14:53 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:14:53 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:14:53 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:53 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:14:53 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/4247453847' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Dec 13 02:14:53 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/4247453847' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[7.13( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[3.15( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[3.12( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[3.f( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[7.9( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[3.c( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[7.f( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[7.6( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[7.4( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[3.1( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[3.3( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[3.6( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[7.3( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[3.9( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[3.a( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[7.1f( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[3.1b( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[7.18( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[7.1b( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[3.1f( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[3.17( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[7.1c( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[3.18( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[3.16( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[7.11( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[3.11( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[7.15( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[3.e( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[7.a( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[7.8( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[7.5( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[3.5( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[7.1( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[7.2( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[3.7( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[3.8( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[7.c( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[7.e( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[3.1d( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[3.1e( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[7.1a( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:53 np0005558317 nice_ptolemy[92516]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:14:53 np0005558317 nice_ptolemy[92516]: --> All data devices are unavailable
Dec 13 02:14:53 np0005558317 python3[92553]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:14:53 np0005558317 systemd[1]: libpod-e03f1c73fbef2eaa47cb26e19a1c44de0ba91db77530d371697a286558d61096.scope: Deactivated successfully.
Dec 13 02:14:53 np0005558317 podman[92502]: 2025-12-13 07:14:53.51609433 +0000 UTC m=+0.491482158 container died e03f1c73fbef2eaa47cb26e19a1c44de0ba91db77530d371697a286558d61096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ptolemy, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 02:14:53 np0005558317 systemd[1]: var-lib-containers-storage-overlay-4214272df158ed85bd6c6b696971883a186c28eab5d769d9956596244d114aaa-merged.mount: Deactivated successfully.
Dec 13 02:14:53 np0005558317 podman[92502]: 2025-12-13 07:14:53.53981882 +0000 UTC m=+0.515206628 container remove e03f1c73fbef2eaa47cb26e19a1c44de0ba91db77530d371697a286558d61096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 02:14:53 np0005558317 systemd[1]: libpod-conmon-e03f1c73fbef2eaa47cb26e19a1c44de0ba91db77530d371697a286558d61096.scope: Deactivated successfully.
Dec 13 02:14:53 np0005558317 podman[92563]: 2025-12-13 07:14:53.553516525 +0000 UTC m=+0.040353666 container create 4b9fac4f838135e7e15a07621a7c2287983fe10e6027eef6b3dc6fe47fdd690c (image=quay.io/ceph/ceph:v20, name=eager_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 02:14:53 np0005558317 systemd[1]: Started libpod-conmon-4b9fac4f838135e7e15a07621a7c2287983fe10e6027eef6b3dc6fe47fdd690c.scope.
Dec 13 02:14:53 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:53 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d753cdbddc33c10ec3a66780ab30219b024850be73f4c35c3d34b958efcd13/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:53 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d753cdbddc33c10ec3a66780ab30219b024850be73f4c35c3d34b958efcd13/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:53 np0005558317 podman[92563]: 2025-12-13 07:14:53.608193242 +0000 UTC m=+0.095030393 container init 4b9fac4f838135e7e15a07621a7c2287983fe10e6027eef6b3dc6fe47fdd690c (image=quay.io/ceph/ceph:v20, name=eager_agnesi, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 02:14:53 np0005558317 podman[92563]: 2025-12-13 07:14:53.612854863 +0000 UTC m=+0.099692003 container start 4b9fac4f838135e7e15a07621a7c2287983fe10e6027eef6b3dc6fe47fdd690c (image=quay.io/ceph/ceph:v20, name=eager_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:53 np0005558317 podman[92563]: 2025-12-13 07:14:53.614254163 +0000 UTC m=+0.101091304 container attach 4b9fac4f838135e7e15a07621a7c2287983fe10e6027eef6b3dc6fe47fdd690c (image=quay.io/ceph/ceph:v20, name=eager_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3)
Dec 13 02:14:53 np0005558317 podman[92563]: 2025-12-13 07:14:53.538040237 +0000 UTC m=+0.024877399 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:14:53 np0005558317 podman[92670]: 2025-12-13 07:14:53.899013451 +0000 UTC m=+0.028231322 container create bb2cba6456d4d48641d808596a433562222b8bf4a944c93d71ed69370f85546f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gates, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:53 np0005558317 systemd[1]: Started libpod-conmon-bb2cba6456d4d48641d808596a433562222b8bf4a944c93d71ed69370f85546f.scope.
Dec 13 02:14:53 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:53 np0005558317 podman[92670]: 2025-12-13 07:14:53.943825747 +0000 UTC m=+0.073043647 container init bb2cba6456d4d48641d808596a433562222b8bf4a944c93d71ed69370f85546f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gates, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 02:14:53 np0005558317 podman[92670]: 2025-12-13 07:14:53.947888533 +0000 UTC m=+0.077106413 container start bb2cba6456d4d48641d808596a433562222b8bf4a944c93d71ed69370f85546f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:53 np0005558317 podman[92670]: 2025-12-13 07:14:53.95002527 +0000 UTC m=+0.079243170 container attach bb2cba6456d4d48641d808596a433562222b8bf4a944c93d71ed69370f85546f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gates, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:53 np0005558317 affectionate_gates[92683]: 167 167
Dec 13 02:14:53 np0005558317 systemd[1]: libpod-bb2cba6456d4d48641d808596a433562222b8bf4a944c93d71ed69370f85546f.scope: Deactivated successfully.
Dec 13 02:14:53 np0005558317 conmon[92683]: conmon bb2cba6456d4d48641d8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bb2cba6456d4d48641d808596a433562222b8bf4a944c93d71ed69370f85546f.scope/container/memory.events
Dec 13 02:14:53 np0005558317 podman[92670]: 2025-12-13 07:14:53.952842114 +0000 UTC m=+0.082059993 container died bb2cba6456d4d48641d808596a433562222b8bf4a944c93d71ed69370f85546f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gates, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:53 np0005558317 systemd[1]: var-lib-containers-storage-overlay-04e6a91a0a6ebf93ede983b569a00a25183a219edfe71426e052bc24c6225056-merged.mount: Deactivated successfully.
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Dec 13 02:14:53 np0005558317 podman[92670]: 2025-12-13 07:14:53.972974443 +0000 UTC m=+0.102192323 container remove bb2cba6456d4d48641d808596a433562222b8bf4a944c93d71ed69370f85546f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:53 np0005558317 podman[92670]: 2025-12-13 07:14:53.887228221 +0000 UTC m=+0.016446122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:53 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Dec 13 02:14:53 np0005558317 systemd[1]: libpod-conmon-bb2cba6456d4d48641d808596a433562222b8bf4a944c93d71ed69370f85546f.scope: Deactivated successfully.
Dec 13 02:14:54 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 13 02:14:54 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/595908538' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 13 02:14:54 np0005558317 eager_agnesi[92586]: 
Dec 13 02:14:54 np0005558317 eager_agnesi[92586]: {"fsid":"00fdae1b-7fad-5f1b-8734-ba4d9298a6de","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":91,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":33,"num_osds":3,"num_up_osds":3,"osd_up_since":1765610062,"num_in_osds":3,"osd_in_since":1765610047,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193}],"num_pgs":193,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83853312,"bytes_avail":64328073216,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2025-12-13T07:14:50:410071+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-13T07:14:40.194918+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"cf19f399-bc4c-48de-b8a2-1b3512091033":{"message":"Global Recovery Event (5s)\n      [=======================.....] ","progress":0.8393782377243042,"add_to_ceph_s":true}}}
Dec 13 02:14:54 np0005558317 systemd[1]: libpod-4b9fac4f838135e7e15a07621a7c2287983fe10e6027eef6b3dc6fe47fdd690c.scope: Deactivated successfully.
Dec 13 02:14:54 np0005558317 podman[92563]: 2025-12-13 07:14:54.02029373 +0000 UTC m=+0.507130870 container died 4b9fac4f838135e7e15a07621a7c2287983fe10e6027eef6b3dc6fe47fdd690c (image=quay.io/ceph/ceph:v20, name=eager_agnesi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 02:14:54 np0005558317 systemd[1]: var-lib-containers-storage-overlay-a8d753cdbddc33c10ec3a66780ab30219b024850be73f4c35c3d34b958efcd13-merged.mount: Deactivated successfully.
Dec 13 02:14:54 np0005558317 podman[92563]: 2025-12-13 07:14:54.043800932 +0000 UTC m=+0.530638073 container remove 4b9fac4f838135e7e15a07621a7c2287983fe10e6027eef6b3dc6fe47fdd690c (image=quay.io/ceph/ceph:v20, name=eager_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:54 np0005558317 systemd[1]: libpod-conmon-4b9fac4f838135e7e15a07621a7c2287983fe10e6027eef6b3dc6fe47fdd690c.scope: Deactivated successfully.
Dec 13 02:14:54 np0005558317 ceph-mgr[75200]: [progress INFO root] Completed event cf19f399-bc4c-48de-b8a2-1b3512091033 (Global Recovery Event) in 10 seconds
Dec 13 02:14:54 np0005558317 podman[92716]: 2025-12-13 07:14:54.097920992 +0000 UTC m=+0.029288780 container create 95bf82c5f8494264d3296b62a26d6610b28c4a0d4a6c8cbeff48728c64a278d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 02:14:54 np0005558317 systemd[1]: Started libpod-conmon-95bf82c5f8494264d3296b62a26d6610b28c4a0d4a6c8cbeff48728c64a278d5.scope.
Dec 13 02:14:54 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:54 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410aa50d47a33a22361fcbcaba30615cc1efa0350e897edf638710efa68935cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:54 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410aa50d47a33a22361fcbcaba30615cc1efa0350e897edf638710efa68935cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:54 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410aa50d47a33a22361fcbcaba30615cc1efa0350e897edf638710efa68935cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:54 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410aa50d47a33a22361fcbcaba30615cc1efa0350e897edf638710efa68935cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:54 np0005558317 podman[92716]: 2025-12-13 07:14:54.15021075 +0000 UTC m=+0.081578559 container init 95bf82c5f8494264d3296b62a26d6610b28c4a0d4a6c8cbeff48728c64a278d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_elbakyan, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 02:14:54 np0005558317 podman[92716]: 2025-12-13 07:14:54.156905793 +0000 UTC m=+0.088273593 container start 95bf82c5f8494264d3296b62a26d6610b28c4a0d4a6c8cbeff48728c64a278d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_elbakyan, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:54 np0005558317 podman[92716]: 2025-12-13 07:14:54.159665701 +0000 UTC m=+0.091033519 container attach 95bf82c5f8494264d3296b62a26d6610b28c4a0d4a6c8cbeff48728c64a278d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_elbakyan, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 02:14:54 np0005558317 podman[92716]: 2025-12-13 07:14:54.08575747 +0000 UTC m=+0.017125269 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:54 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Dec 13 02:14:54 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Dec 13 02:14:54 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[5.1e( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[2.f( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[2.16( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[5.15( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[2.13( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[5.14( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[2.19( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[3.1f( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[2.11( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[3.12( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[3.15( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[3.17( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[7.13( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[2.8( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[2.b( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[7.1b( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[3.a( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[3.9( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[7.f( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[7.3( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[5.3( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[3.6( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[5.2( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[2.2( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[5.5( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[2.1f( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[2.1c( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[7.6( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[5.4( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[7.9( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[3.3( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[2.1d( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[5.7( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[3.1( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[3.c( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[7.4( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[3.f( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[3.1b( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[7.1f( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[2.18( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[7.18( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v63: 193 pgs: 36 peering, 157 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[5.1d( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[4.d( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[4.f( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[6.d( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[6.2( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[4.2( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[4.4( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[6.6( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[6.4( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[6.1( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[4.7( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[4.5( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[2.d( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[4.9( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[6.b( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[5.9( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[4.8( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[5.16( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[6.17( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[4.14( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[2.15( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[5.12( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[4.12( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[2.17( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[5.13( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[5.11( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[6.c( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[6.1d( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[6.1c( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[2.1b( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[6.e( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[2.a( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[2.3( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[2.5( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[4.10( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[2.4( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[3.18( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[4.1c( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[7.1c( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[3.16( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[6.1f( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [2] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[4.11( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[6.11( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [2] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[4.13( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[7.11( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[6.13( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [2] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[6.15( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [2] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[7.15( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[6.14( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [2] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[7.a( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[3.e( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[7.8( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[3.11( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[7.5( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[7.2( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[4.1( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[7.1( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[4.a( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[6.8( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [2] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[4.e( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[3.7( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[3.8( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[7.e( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[6.f( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [2] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[7.c( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[3.1d( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[3.5( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[2.7( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[2.6( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[2.9( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[5.f( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[5.c( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[5.1( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[5.1a( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[4.1a( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[4.1b( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[4.18( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[5.18( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[5.19( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[6.1e( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[3.1e( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[7.1a( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:54 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:14:54 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:14:54 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:14:54 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:14:54 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:14:54 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:14:54 np0005558317 python3[92759]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:14:54 np0005558317 podman[92760]: 2025-12-13 07:14:54.341869836 +0000 UTC m=+0.029730580 container create 0fc64db874ba73da58fd174898086b6538a68e42ede29f2c41d2048cff7ab4d6 (image=quay.io/ceph/ceph:v20, name=vigilant_bhaskara, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 02:14:54 np0005558317 systemd[1]: Started libpod-conmon-0fc64db874ba73da58fd174898086b6538a68e42ede29f2c41d2048cff7ab4d6.scope.
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]: {
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:    "0": [
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:        {
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "devices": [
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "/dev/loop3"
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            ],
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "lv_name": "ceph_lv0",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "lv_size": "21470642176",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "name": "ceph_lv0",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "tags": {
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.cluster_name": "ceph",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.crush_device_class": "",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.encrypted": "0",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.objectstore": "bluestore",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.osd_id": "0",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.type": "block",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.vdo": "0",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.with_tpm": "0"
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            },
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "type": "block",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "vg_name": "ceph_vg0"
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:        }
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:    ],
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:    "1": [
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:        {
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "devices": [
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "/dev/loop4"
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            ],
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "lv_name": "ceph_lv1",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "lv_size": "21470642176",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "name": "ceph_lv1",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "tags": {
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.cluster_name": "ceph",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.crush_device_class": "",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.encrypted": "0",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.objectstore": "bluestore",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.osd_id": "1",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.type": "block",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.vdo": "0",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.with_tpm": "0"
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            },
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "type": "block",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "vg_name": "ceph_vg1"
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:        }
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:    ],
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:    "2": [
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:        {
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "devices": [
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "/dev/loop5"
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            ],
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "lv_name": "ceph_lv2",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "lv_size": "21470642176",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "name": "ceph_lv2",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "tags": {
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.cluster_name": "ceph",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.crush_device_class": "",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.encrypted": "0",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.objectstore": "bluestore",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.osd_id": "2",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.type": "block",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.vdo": "0",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:                "ceph.with_tpm": "0"
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            },
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "type": "block",
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:            "vg_name": "ceph_vg2"
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:        }
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]:    ]
Dec 13 02:14:54 np0005558317 unruffled_elbakyan[92729]: }
Dec 13 02:14:54 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:54 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e7ce76781858ffbb41eae0b615e3d8a947c34030a3da43a017e46573808d84c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:54 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e7ce76781858ffbb41eae0b615e3d8a947c34030a3da43a017e46573808d84c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:54 np0005558317 podman[92760]: 2025-12-13 07:14:54.396136471 +0000 UTC m=+0.083997235 container init 0fc64db874ba73da58fd174898086b6538a68e42ede29f2c41d2048cff7ab4d6 (image=quay.io/ceph/ceph:v20, name=vigilant_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:54 np0005558317 systemd[1]: libpod-95bf82c5f8494264d3296b62a26d6610b28c4a0d4a6c8cbeff48728c64a278d5.scope: Deactivated successfully.
Dec 13 02:14:54 np0005558317 conmon[92729]: conmon 95bf82c5f8494264d329 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-95bf82c5f8494264d3296b62a26d6610b28c4a0d4a6c8cbeff48728c64a278d5.scope/container/memory.events
Dec 13 02:14:54 np0005558317 podman[92760]: 2025-12-13 07:14:54.402336444 +0000 UTC m=+0.090197188 container start 0fc64db874ba73da58fd174898086b6538a68e42ede29f2c41d2048cff7ab4d6 (image=quay.io/ceph/ceph:v20, name=vigilant_bhaskara, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:54 np0005558317 podman[92760]: 2025-12-13 07:14:54.403630446 +0000 UTC m=+0.091491180 container attach 0fc64db874ba73da58fd174898086b6538a68e42ede29f2c41d2048cff7ab4d6 (image=quay.io/ceph/ceph:v20, name=vigilant_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:54 np0005558317 podman[92716]: 2025-12-13 07:14:54.404180179 +0000 UTC m=+0.335547979 container died 95bf82c5f8494264d3296b62a26d6610b28c4a0d4a6c8cbeff48728c64a278d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 02:14:54 np0005558317 podman[92760]: 2025-12-13 07:14:54.330904336 +0000 UTC m=+0.018765100 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:14:54 np0005558317 podman[92716]: 2025-12-13 07:14:54.430306124 +0000 UTC m=+0.361673923 container remove 95bf82c5f8494264d3296b62a26d6610b28c4a0d4a6c8cbeff48728c64a278d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_elbakyan, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Dec 13 02:14:54 np0005558317 systemd[1]: libpod-conmon-95bf82c5f8494264d3296b62a26d6610b28c4a0d4a6c8cbeff48728c64a278d5.scope: Deactivated successfully.
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Dec 13 02:14:54 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Dec 13 02:14:54 np0005558317 podman[92867]: 2025-12-13 07:14:54.776802988 +0000 UTC m=+0.026516218 container create 7a7307296d333ff74b40340fe94bb16c8c0d4a2210fb67f981172ca565920ad6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mestorf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:54 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 02:14:54 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1008101709' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 02:14:54 np0005558317 systemd[1]: Started libpod-conmon-7a7307296d333ff74b40340fe94bb16c8c0d4a2210fb67f981172ca565920ad6.scope.
Dec 13 02:14:54 np0005558317 vigilant_bhaskara[92776]: 
Dec 13 02:14:54 np0005558317 vigilant_bhaskara[92776]: {"epoch":1,"fsid":"00fdae1b-7fad-5f1b-8734-ba4d9298a6de","modified":"2025-12-13T07:13:19.809500Z","created":"2025-12-13T07:13:19.809500Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Dec 13 02:14:54 np0005558317 vigilant_bhaskara[92776]: dumped monmap epoch 1
Dec 13 02:14:54 np0005558317 podman[92760]: 2025-12-13 07:14:54.81183939 +0000 UTC m=+0.499700134 container died 0fc64db874ba73da58fd174898086b6538a68e42ede29f2c41d2048cff7ab4d6 (image=quay.io/ceph/ceph:v20, name=vigilant_bhaskara, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 02:14:54 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:54 np0005558317 systemd[1]: libpod-0fc64db874ba73da58fd174898086b6538a68e42ede29f2c41d2048cff7ab4d6.scope: Deactivated successfully.
Dec 13 02:14:54 np0005558317 podman[92867]: 2025-12-13 07:14:54.820957227 +0000 UTC m=+0.070670477 container init 7a7307296d333ff74b40340fe94bb16c8c0d4a2210fb67f981172ca565920ad6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 02:14:54 np0005558317 podman[92867]: 2025-12-13 07:14:54.825946855 +0000 UTC m=+0.075660085 container start 7a7307296d333ff74b40340fe94bb16c8c0d4a2210fb67f981172ca565920ad6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mestorf, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 02:14:54 np0005558317 systemd[1]: var-lib-containers-storage-overlay-3e7ce76781858ffbb41eae0b615e3d8a947c34030a3da43a017e46573808d84c-merged.mount: Deactivated successfully.
Dec 13 02:14:54 np0005558317 compassionate_mestorf[92881]: 167 167
Dec 13 02:14:54 np0005558317 systemd[1]: libpod-7a7307296d333ff74b40340fe94bb16c8c0d4a2210fb67f981172ca565920ad6.scope: Deactivated successfully.
Dec 13 02:14:54 np0005558317 podman[92867]: 2025-12-13 07:14:54.831914431 +0000 UTC m=+0.081627661 container attach 7a7307296d333ff74b40340fe94bb16c8c0d4a2210fb67f981172ca565920ad6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 02:14:54 np0005558317 podman[92867]: 2025-12-13 07:14:54.832234393 +0000 UTC m=+0.081947643 container died 7a7307296d333ff74b40340fe94bb16c8c0d4a2210fb67f981172ca565920ad6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:54 np0005558317 podman[92760]: 2025-12-13 07:14:54.837936329 +0000 UTC m=+0.525797073 container remove 0fc64db874ba73da58fd174898086b6538a68e42ede29f2c41d2048cff7ab4d6 (image=quay.io/ceph/ceph:v20, name=vigilant_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:54 np0005558317 systemd[1]: libpod-conmon-0fc64db874ba73da58fd174898086b6538a68e42ede29f2c41d2048cff7ab4d6.scope: Deactivated successfully.
Dec 13 02:14:54 np0005558317 systemd[1]: var-lib-containers-storage-overlay-62307fff621a39dbbc14a1bb8f2eed0a2bdf121fe0aef0058b081c924d87362f-merged.mount: Deactivated successfully.
Dec 13 02:14:54 np0005558317 podman[92867]: 2025-12-13 07:14:54.857963371 +0000 UTC m=+0.107676601 container remove 7a7307296d333ff74b40340fe94bb16c8c0d4a2210fb67f981172ca565920ad6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mestorf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Dec 13 02:14:54 np0005558317 podman[92867]: 2025-12-13 07:14:54.766658722 +0000 UTC m=+0.016371962 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:54 np0005558317 systemd[1]: libpod-conmon-7a7307296d333ff74b40340fe94bb16c8c0d4a2210fb67f981172ca565920ad6.scope: Deactivated successfully.
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Dec 13 02:14:54 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Dec 13 02:14:54 np0005558317 podman[92913]: 2025-12-13 07:14:54.968912481 +0000 UTC m=+0.028129231 container create b8bcb33142e5a6c09a80b7189d4963db35608773ef024ddd231c43c4d3658c83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_albattani, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:54 np0005558317 systemd[1]: Started libpod-conmon-b8bcb33142e5a6c09a80b7189d4963db35608773ef024ddd231c43c4d3658c83.scope.
Dec 13 02:14:54 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:55 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/506ba23af6da1ee6784a1ebc4adea43abc07a2e956a59fe8837ec08957128472/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:55 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/506ba23af6da1ee6784a1ebc4adea43abc07a2e956a59fe8837ec08957128472/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:55 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/506ba23af6da1ee6784a1ebc4adea43abc07a2e956a59fe8837ec08957128472/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:55 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/506ba23af6da1ee6784a1ebc4adea43abc07a2e956a59fe8837ec08957128472/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:55 np0005558317 podman[92913]: 2025-12-13 07:14:55.008562364 +0000 UTC m=+0.067779133 container init b8bcb33142e5a6c09a80b7189d4963db35608773ef024ddd231c43c4d3658c83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_albattani, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 02:14:55 np0005558317 podman[92913]: 2025-12-13 07:14:55.015064114 +0000 UTC m=+0.074280863 container start b8bcb33142e5a6c09a80b7189d4963db35608773ef024ddd231c43c4d3658c83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_albattani, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:55 np0005558317 podman[92913]: 2025-12-13 07:14:55.016053653 +0000 UTC m=+0.075270403 container attach b8bcb33142e5a6c09a80b7189d4963db35608773ef024ddd231c43c4d3658c83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 02:14:55 np0005558317 podman[92913]: 2025-12-13 07:14:54.958250001 +0000 UTC m=+0.017466770 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:55 np0005558317 python3[92956]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:14:55 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Dec 13 02:14:55 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Dec 13 02:14:55 np0005558317 podman[92978]: 2025-12-13 07:14:55.315383788 +0000 UTC m=+0.042534154 container create 4b8ac5311c98732cb6584cf13a8175006eda42dbd9556540f3752cfaa968231a (image=quay.io/ceph/ceph:v20, name=vigilant_davinci, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:14:55 np0005558317 systemd[1]: Started libpod-conmon-4b8ac5311c98732cb6584cf13a8175006eda42dbd9556540f3752cfaa968231a.scope.
Dec 13 02:14:55 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:55 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00cf03bf2dadab4b291d72c95ff85c29ad2806389524588287c2d5609524aa2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:55 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00cf03bf2dadab4b291d72c95ff85c29ad2806389524588287c2d5609524aa2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:55 np0005558317 podman[92978]: 2025-12-13 07:14:55.30057246 +0000 UTC m=+0.027722826 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:14:55 np0005558317 podman[92978]: 2025-12-13 07:14:55.399940474 +0000 UTC m=+0.127090860 container init 4b8ac5311c98732cb6584cf13a8175006eda42dbd9556540f3752cfaa968231a (image=quay.io/ceph/ceph:v20, name=vigilant_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:55 np0005558317 podman[92978]: 2025-12-13 07:14:55.404203065 +0000 UTC m=+0.131353431 container start 4b8ac5311c98732cb6584cf13a8175006eda42dbd9556540f3752cfaa968231a (image=quay.io/ceph/ceph:v20, name=vigilant_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 02:14:55 np0005558317 podman[92978]: 2025-12-13 07:14:55.405581536 +0000 UTC m=+0.132731962 container attach 4b8ac5311c98732cb6584cf13a8175006eda42dbd9556540f3752cfaa968231a (image=quay.io/ceph/ceph:v20, name=vigilant_davinci, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 02:14:55 np0005558317 lvm[93064]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:14:55 np0005558317 lvm[93065]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:14:55 np0005558317 lvm[93064]: VG ceph_vg0 finished
Dec 13 02:14:55 np0005558317 lvm[93065]: VG ceph_vg1 finished
Dec 13 02:14:55 np0005558317 lvm[93068]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:14:55 np0005558317 lvm[93068]: VG ceph_vg2 finished
Dec 13 02:14:55 np0005558317 boring_albattani[92926]: {}
Dec 13 02:14:55 np0005558317 systemd[1]: libpod-b8bcb33142e5a6c09a80b7189d4963db35608773ef024ddd231c43c4d3658c83.scope: Deactivated successfully.
Dec 13 02:14:55 np0005558317 conmon[92926]: conmon b8bcb33142e5a6c09a80 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8bcb33142e5a6c09a80b7189d4963db35608773ef024ddd231c43c4d3658c83.scope/container/memory.events
Dec 13 02:14:55 np0005558317 podman[93071]: 2025-12-13 07:14:55.62880108 +0000 UTC m=+0.017750854 container died b8bcb33142e5a6c09a80b7189d4963db35608773ef024ddd231c43c4d3658c83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_albattani, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:55 np0005558317 systemd[1]: var-lib-containers-storage-overlay-506ba23af6da1ee6784a1ebc4adea43abc07a2e956a59fe8837ec08957128472-merged.mount: Deactivated successfully.
Dec 13 02:14:55 np0005558317 podman[93071]: 2025-12-13 07:14:55.649815026 +0000 UTC m=+0.038764791 container remove b8bcb33142e5a6c09a80b7189d4963db35608773ef024ddd231c43c4d3658c83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:55 np0005558317 systemd[1]: libpod-conmon-b8bcb33142e5a6c09a80b7189d4963db35608773ef024ddd231c43c4d3658c83.scope: Deactivated successfully.
Dec 13 02:14:55 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:14:55 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:55 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:14:55 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:55 np0005558317 ceph-mgr[75200]: [progress INFO root] update: starting ev 2fdfac95-ecea-4b6d-9d0b-497dbccd0217 (Updating rgw.rgw deployment (+1 -> 1))
Dec 13 02:14:55 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kikquh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 13 02:14:55 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kikquh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Dec 13 02:14:55 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kikquh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 13 02:14:55 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec 13 02:14:55 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:55 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:14:55 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:14:55 np0005558317 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.kikquh on compute-0
Dec 13 02:14:55 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.kikquh on compute-0
Dec 13 02:14:55 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Dec 13 02:14:55 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3324588932' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Dec 13 02:14:55 np0005558317 vigilant_davinci[93025]: [client.openstack]
Dec 13 02:14:55 np0005558317 vigilant_davinci[93025]: #011key = AQDvET1pAAAAABAAXRTVwZkpmvDiKzdXsEX84w==
Dec 13 02:14:55 np0005558317 vigilant_davinci[93025]: #011caps mgr = "allow *"
Dec 13 02:14:55 np0005558317 vigilant_davinci[93025]: #011caps mon = "profile rbd"
Dec 13 02:14:55 np0005558317 vigilant_davinci[93025]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Dec 13 02:14:55 np0005558317 systemd[1]: libpod-4b8ac5311c98732cb6584cf13a8175006eda42dbd9556540f3752cfaa968231a.scope: Deactivated successfully.
Dec 13 02:14:55 np0005558317 podman[92978]: 2025-12-13 07:14:55.825738306 +0000 UTC m=+0.552888672 container died 4b8ac5311c98732cb6584cf13a8175006eda42dbd9556540f3752cfaa968231a (image=quay.io/ceph/ceph:v20, name=vigilant_davinci, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 02:14:55 np0005558317 systemd[1]: var-lib-containers-storage-overlay-e00cf03bf2dadab4b291d72c95ff85c29ad2806389524588287c2d5609524aa2-merged.mount: Deactivated successfully.
Dec 13 02:14:55 np0005558317 podman[92978]: 2025-12-13 07:14:55.848211615 +0000 UTC m=+0.575361981 container remove 4b8ac5311c98732cb6584cf13a8175006eda42dbd9556540f3752cfaa968231a (image=quay.io/ceph/ceph:v20, name=vigilant_davinci, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:55 np0005558317 systemd[1]: libpod-conmon-4b8ac5311c98732cb6584cf13a8175006eda42dbd9556540f3752cfaa968231a.scope: Deactivated successfully.
Dec 13 02:14:56 np0005558317 podman[93177]: 2025-12-13 07:14:56.086709315 +0000 UTC m=+0.025708250 container create f10896be36c6b99e3bbe0052972ed2a3f15dc192d4c6f0621d0d3155a6f3a40d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 02:14:56 np0005558317 systemd[1]: Started libpod-conmon-f10896be36c6b99e3bbe0052972ed2a3f15dc192d4c6f0621d0d3155a6f3a40d.scope.
Dec 13 02:14:56 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:56 np0005558317 podman[93177]: 2025-12-13 07:14:56.132166342 +0000 UTC m=+0.071165287 container init f10896be36c6b99e3bbe0052972ed2a3f15dc192d4c6f0621d0d3155a6f3a40d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_kalam, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:14:56 np0005558317 podman[93177]: 2025-12-13 07:14:56.136615083 +0000 UTC m=+0.075614019 container start f10896be36c6b99e3bbe0052972ed2a3f15dc192d4c6f0621d0d3155a6f3a40d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_kalam, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 02:14:56 np0005558317 nice_kalam[93190]: 167 167
Dec 13 02:14:56 np0005558317 systemd[1]: libpod-f10896be36c6b99e3bbe0052972ed2a3f15dc192d4c6f0621d0d3155a6f3a40d.scope: Deactivated successfully.
Dec 13 02:14:56 np0005558317 podman[93177]: 2025-12-13 07:14:56.140835396 +0000 UTC m=+0.079834351 container attach f10896be36c6b99e3bbe0052972ed2a3f15dc192d4c6f0621d0d3155a6f3a40d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 02:14:56 np0005558317 podman[93177]: 2025-12-13 07:14:56.141043277 +0000 UTC m=+0.080042212 container died f10896be36c6b99e3bbe0052972ed2a3f15dc192d4c6f0621d0d3155a6f3a40d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_kalam, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 02:14:56 np0005558317 systemd[1]: var-lib-containers-storage-overlay-e2f7eab366051ed7ed9dae7d44c9bebec0e2e9bb2bd4f8865464fb78a5d3e2c4-merged.mount: Deactivated successfully.
Dec 13 02:14:56 np0005558317 podman[93177]: 2025-12-13 07:14:56.158341869 +0000 UTC m=+0.097340804 container remove f10896be36c6b99e3bbe0052972ed2a3f15dc192d4c6f0621d0d3155a6f3a40d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_kalam, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 02:14:56 np0005558317 podman[93177]: 2025-12-13 07:14:56.076223547 +0000 UTC m=+0.015222502 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:56 np0005558317 systemd[1]: libpod-conmon-f10896be36c6b99e3bbe0052972ed2a3f15dc192d4c6f0621d0d3155a6f3a40d.scope: Deactivated successfully.
Dec 13 02:14:56 np0005558317 systemd[1]: Reloading.
Dec 13 02:14:56 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v64: 193 pgs: 36 peering, 157 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:14:56 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:56 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:56 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kikquh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Dec 13 02:14:56 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kikquh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 13 02:14:56 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:56 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/3324588932' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Dec 13 02:14:56 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:14:56 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:14:56 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Dec 13 02:14:56 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Dec 13 02:14:56 np0005558317 systemd[1]: Reloading.
Dec 13 02:14:56 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:14:56 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:14:56 np0005558317 systemd[1]: Starting Ceph rgw.rgw.compute-0.kikquh for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de...
Dec 13 02:14:56 np0005558317 podman[93422]: 2025-12-13 07:14:56.810903878 +0000 UTC m=+0.027521908 container create 69ac193e949f4ec1c85ec7f6eeac603563ef58b890ae60370ce1d5f216a4080c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-rgw-rgw-compute-0-kikquh, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 02:14:56 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e333fa43a805bfdea08e5fddc8346c8cf7d0194486896d26b03368e3d204987f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:56 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e333fa43a805bfdea08e5fddc8346c8cf7d0194486896d26b03368e3d204987f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:56 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e333fa43a805bfdea08e5fddc8346c8cf7d0194486896d26b03368e3d204987f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:56 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e333fa43a805bfdea08e5fddc8346c8cf7d0194486896d26b03368e3d204987f/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.kikquh supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:56 np0005558317 podman[93422]: 2025-12-13 07:14:56.849930158 +0000 UTC m=+0.066548198 container init 69ac193e949f4ec1c85ec7f6eeac603563ef58b890ae60370ce1d5f216a4080c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-rgw-rgw-compute-0-kikquh, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:56 np0005558317 podman[93422]: 2025-12-13 07:14:56.854302286 +0000 UTC m=+0.070920315 container start 69ac193e949f4ec1c85ec7f6eeac603563ef58b890ae60370ce1d5f216a4080c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-rgw-rgw-compute-0-kikquh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 02:14:56 np0005558317 bash[93422]: 69ac193e949f4ec1c85ec7f6eeac603563ef58b890ae60370ce1d5f216a4080c
Dec 13 02:14:56 np0005558317 podman[93422]: 2025-12-13 07:14:56.799794798 +0000 UTC m=+0.016412848 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:56 np0005558317 systemd[1]: Started Ceph rgw.rgw.compute-0.kikquh for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de.
Dec 13 02:14:56 np0005558317 radosgw[93487]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec 13 02:14:56 np0005558317 radosgw[93487]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process radosgw, pid 2
Dec 13 02:14:56 np0005558317 radosgw[93487]: framework: beast
Dec 13 02:14:56 np0005558317 radosgw[93487]: framework conf key: endpoint, val: 192.168.122.100:8082
Dec 13 02:14:56 np0005558317 radosgw[93487]: init_numa not setting numa affinity
Dec 13 02:14:56 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:14:56 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:56 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:14:56 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:56 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 13 02:14:56 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:56 np0005558317 ceph-mgr[75200]: [progress INFO root] complete: finished ev 2fdfac95-ecea-4b6d-9d0b-497dbccd0217 (Updating rgw.rgw deployment (+1 -> 1))
Dec 13 02:14:56 np0005558317 ceph-mgr[75200]: [progress INFO root] Completed event 2fdfac95-ecea-4b6d-9d0b-497dbccd0217 (Updating rgw.rgw deployment (+1 -> 1)) in 1 seconds
Dec 13 02:14:56 np0005558317 ceph-mgr[75200]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Dec 13 02:14:56 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Dec 13 02:14:56 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 13 02:14:56 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:56 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 13 02:14:56 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:56 np0005558317 ceph-mgr[75200]: [progress INFO root] update: starting ev 3f96108e-8b09-4845-98c0-5ac9311cc03e (Updating mds.cephfs deployment (+1 -> 1))
Dec 13 02:14:56 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zwnyoz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 13 02:14:56 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zwnyoz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 13 02:14:56 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zwnyoz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 13 02:14:56 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:14:56 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:14:56 np0005558317 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.zwnyoz on compute-0
Dec 13 02:14:56 np0005558317 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.zwnyoz on compute-0
Dec 13 02:14:56 np0005558317 ansible-async_wrapper.py[93489]: Invoked with j932427807995 30 /home/zuul/.ansible/tmp/ansible-tmp-1765610096.6256976-37138-243185074326914/AnsiballZ_command.py _
Dec 13 02:14:56 np0005558317 ansible-async_wrapper.py[93568]: Starting module and watcher
Dec 13 02:14:56 np0005558317 ansible-async_wrapper.py[93568]: Start watching 93569 (30)
Dec 13 02:14:57 np0005558317 ansible-async_wrapper.py[93569]: Start module (93569)
Dec 13 02:14:57 np0005558317 ansible-async_wrapper.py[93489]: Return async_wrapper task started.
Dec 13 02:14:57 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Dec 13 02:14:57 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Dec 13 02:14:57 np0005558317 python3[93572]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:14:57 np0005558317 podman[93573]: 2025-12-13 07:14:57.167176357 +0000 UTC m=+0.028804460 container create 8532d33052f7cc55b77631d2784e20c17fcd52bd9677fa2c17af423fb360c8d7 (image=quay.io/ceph/ceph:v20, name=naughty_raman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:57 np0005558317 systemd[1]: Started libpod-conmon-8532d33052f7cc55b77631d2784e20c17fcd52bd9677fa2c17af423fb360c8d7.scope.
Dec 13 02:14:57 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:57 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ddf86cd2100e3e80d23f6321295c0cd3a261e381695184d4d4290bf5c7893fa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:57 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ddf86cd2100e3e80d23f6321295c0cd3a261e381695184d4d4290bf5c7893fa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:57 np0005558317 podman[93573]: 2025-12-13 07:14:57.220478088 +0000 UTC m=+0.082106202 container init 8532d33052f7cc55b77631d2784e20c17fcd52bd9677fa2c17af423fb360c8d7 (image=quay.io/ceph/ceph:v20, name=naughty_raman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:57 np0005558317 ceph-mon[74928]: Deploying daemon rgw.rgw.compute-0.kikquh on compute-0
Dec 13 02:14:57 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:57 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:57 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:57 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:57 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:57 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zwnyoz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 13 02:14:57 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zwnyoz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 13 02:14:57 np0005558317 podman[93573]: 2025-12-13 07:14:57.227395188 +0000 UTC m=+0.089023291 container start 8532d33052f7cc55b77631d2784e20c17fcd52bd9677fa2c17af423fb360c8d7 (image=quay.io/ceph/ceph:v20, name=naughty_raman, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:14:57 np0005558317 podman[93573]: 2025-12-13 07:14:57.233719696 +0000 UTC m=+0.095347809 container attach 8532d33052f7cc55b77631d2784e20c17fcd52bd9677fa2c17af423fb360c8d7 (image=quay.io/ceph/ceph:v20, name=naughty_raman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 02:14:57 np0005558317 podman[93573]: 2025-12-13 07:14:57.156222158 +0000 UTC m=+0.017850271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:14:57 np0005558317 podman[93625]: 2025-12-13 07:14:57.336155364 +0000 UTC m=+0.035409704 container create 099fea401222931ce446c8e41dcf83c5c318918889bec2bedc3606f760a6084f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:14:57 np0005558317 systemd[1]: Started libpod-conmon-099fea401222931ce446c8e41dcf83c5c318918889bec2bedc3606f760a6084f.scope.
Dec 13 02:14:57 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:57 np0005558317 podman[93625]: 2025-12-13 07:14:57.388724709 +0000 UTC m=+0.087979069 container init 099fea401222931ce446c8e41dcf83c5c318918889bec2bedc3606f760a6084f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:57 np0005558317 podman[93625]: 2025-12-13 07:14:57.393366424 +0000 UTC m=+0.092620764 container start 099fea401222931ce446c8e41dcf83c5c318918889bec2bedc3606f760a6084f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:57 np0005558317 podman[93625]: 2025-12-13 07:14:57.395673469 +0000 UTC m=+0.094927830 container attach 099fea401222931ce446c8e41dcf83c5c318918889bec2bedc3606f760a6084f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:57 np0005558317 festive_cannon[93657]: 167 167
Dec 13 02:14:57 np0005558317 systemd[1]: libpod-099fea401222931ce446c8e41dcf83c5c318918889bec2bedc3606f760a6084f.scope: Deactivated successfully.
Dec 13 02:14:57 np0005558317 podman[93625]: 2025-12-13 07:14:57.397397891 +0000 UTC m=+0.096652241 container died 099fea401222931ce446c8e41dcf83c5c318918889bec2bedc3606f760a6084f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cannon, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:57 np0005558317 systemd[1]: var-lib-containers-storage-overlay-c4febb36d8f08c0f19621df737b92154125202a610a9adf0bf160b70a89699d4-merged.mount: Deactivated successfully.
Dec 13 02:14:57 np0005558317 podman[93625]: 2025-12-13 07:14:57.318623052 +0000 UTC m=+0.017877402 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:57 np0005558317 podman[93625]: 2025-12-13 07:14:57.416992229 +0000 UTC m=+0.116246569 container remove 099fea401222931ce446c8e41dcf83c5c318918889bec2bedc3606f760a6084f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 02:14:57 np0005558317 systemd[1]: libpod-conmon-099fea401222931ce446c8e41dcf83c5c318918889bec2bedc3606f760a6084f.scope: Deactivated successfully.
Dec 13 02:14:57 np0005558317 systemd[1]: Reloading.
Dec 13 02:14:57 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:14:57 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:14:57 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14251 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 02:14:57 np0005558317 naughty_raman[93593]: 
Dec 13 02:14:57 np0005558317 naughty_raman[93593]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 13 02:14:57 np0005558317 podman[93573]: 2025-12-13 07:14:57.583802472 +0000 UTC m=+0.445430585 container died 8532d33052f7cc55b77631d2784e20c17fcd52bd9677fa2c17af423fb360c8d7 (image=quay.io/ceph/ceph:v20, name=naughty_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 02:14:57 np0005558317 systemd[1]: libpod-8532d33052f7cc55b77631d2784e20c17fcd52bd9677fa2c17af423fb360c8d7.scope: Deactivated successfully.
Dec 13 02:14:57 np0005558317 systemd[1]: var-lib-containers-storage-overlay-4ddf86cd2100e3e80d23f6321295c0cd3a261e381695184d4d4290bf5c7893fa-merged.mount: Deactivated successfully.
Dec 13 02:14:57 np0005558317 podman[93573]: 2025-12-13 07:14:57.673519484 +0000 UTC m=+0.535147588 container remove 8532d33052f7cc55b77631d2784e20c17fcd52bd9677fa2c17af423fb360c8d7 (image=quay.io/ceph/ceph:v20, name=naughty_raman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:57 np0005558317 systemd[1]: libpod-conmon-8532d33052f7cc55b77631d2784e20c17fcd52bd9677fa2c17af423fb360c8d7.scope: Deactivated successfully.
Dec 13 02:14:57 np0005558317 ansible-async_wrapper.py[93569]: Module complete (93569)
Dec 13 02:14:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:14:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Dec 13 02:14:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Dec 13 02:14:57 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Dec 13 02:14:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Dec 13 02:14:57 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1471949362' entity='client.rgw.rgw.compute-0.kikquh' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Dec 13 02:14:57 np0005558317 systemd[1]: Reloading.
Dec 13 02:14:57 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:14:57 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:14:57 np0005558317 systemd[1]: Starting Ceph mds.cephfs.compute-0.zwnyoz for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de...
Dec 13 02:14:58 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 35 pg[8.0( empty local-lis/les=0/0 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:58 np0005558317 podman[93821]: 2025-12-13 07:14:58.10377739 +0000 UTC m=+0.031613199 container create c65ab07d188ff0da2857402181db229477778de43e052dbfea38f8c212c47f50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mds-cephfs-compute-0-zwnyoz, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 02:14:58 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf02c05f4f9aa98d6346eecda97a1eaa8bcc35c64dca7a62c6f94dc4c5161700/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:58 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf02c05f4f9aa98d6346eecda97a1eaa8bcc35c64dca7a62c6f94dc4c5161700/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:58 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf02c05f4f9aa98d6346eecda97a1eaa8bcc35c64dca7a62c6f94dc4c5161700/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:58 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf02c05f4f9aa98d6346eecda97a1eaa8bcc35c64dca7a62c6f94dc4c5161700/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.zwnyoz supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:58 np0005558317 podman[93821]: 2025-12-13 07:14:58.154740967 +0000 UTC m=+0.082576805 container init c65ab07d188ff0da2857402181db229477778de43e052dbfea38f8c212c47f50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mds-cephfs-compute-0-zwnyoz, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:58 np0005558317 podman[93821]: 2025-12-13 07:14:58.158996696 +0000 UTC m=+0.086832505 container start c65ab07d188ff0da2857402181db229477778de43e052dbfea38f8c212c47f50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mds-cephfs-compute-0-zwnyoz, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Dec 13 02:14:58 np0005558317 bash[93821]: c65ab07d188ff0da2857402181db229477778de43e052dbfea38f8c212c47f50
Dec 13 02:14:58 np0005558317 podman[93821]: 2025-12-13 07:14:58.089733445 +0000 UTC m=+0.017569273 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:58 np0005558317 systemd[1]: Started Ceph mds.cephfs.compute-0.zwnyoz for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de.
Dec 13 02:14:58 np0005558317 ceph-mds[93864]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 02:14:58 np0005558317 ceph-mds[93864]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Dec 13 02:14:58 np0005558317 ceph-mds[93864]: main not setting numa affinity
Dec 13 02:14:58 np0005558317 ceph-mds[93864]: pidfile_write: ignore empty --pid-file
Dec 13 02:14:58 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mds-cephfs-compute-0-zwnyoz[93860]: starting mds.cephfs.compute-0.zwnyoz at 
Dec 13 02:14:58 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v66: 194 pgs: 1 unknown, 36 peering, 157 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:14:58 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:14:58 np0005558317 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz Updating MDS map to version 2 from mon.0
Dec 13 02:14:58 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:58 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:14:58 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:58 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 13 02:14:58 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:58 np0005558317 ceph-mgr[75200]: [progress INFO root] complete: finished ev 3f96108e-8b09-4845-98c0-5ac9311cc03e (Updating mds.cephfs deployment (+1 -> 1))
Dec 13 02:14:58 np0005558317 ceph-mgr[75200]: [progress INFO root] Completed event 3f96108e-8b09-4845-98c0-5ac9311cc03e (Updating mds.cephfs deployment (+1 -> 1)) in 1 seconds
Dec 13 02:14:58 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Dec 13 02:14:58 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:58 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 13 02:14:58 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:58 np0005558317 ceph-mon[74928]: Saving service rgw.rgw spec with placement compute-0
Dec 13 02:14:58 np0005558317 ceph-mon[74928]: Deploying daemon mds.cephfs.compute-0.zwnyoz on compute-0
Dec 13 02:14:58 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/1471949362' entity='client.rgw.rgw.compute-0.kikquh' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Dec 13 02:14:58 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:58 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:58 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:58 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:58 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:58 np0005558317 python3[93859]: ansible-ansible.legacy.async_status Invoked with jid=j932427807995.93489 mode=status _async_dir=/root/.ansible_async
Dec 13 02:14:58 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Dec 13 02:14:58 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Dec 13 02:14:58 np0005558317 python3[94006]: ansible-ansible.legacy.async_status Invoked with jid=j932427807995.93489 mode=cleanup _async_dir=/root/.ansible_async
Dec 13 02:14:58 np0005558317 podman[94041]: 2025-12-13 07:14:58.680523703 +0000 UTC m=+0.040004370 container exec 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 02:14:58 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Dec 13 02:14:58 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1471949362' entity='client.rgw.rgw.compute-0.kikquh' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 13 02:14:58 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Dec 13 02:14:58 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 36 pg[8.0( empty local-lis/les=35/36 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:14:58 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Dec 13 02:14:58 np0005558317 podman[94041]: 2025-12-13 07:14:58.769748392 +0000 UTC m=+0.129229059 container exec_died 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:14:58 np0005558317 python3[94666]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:14:58 np0005558317 podman[94705]: 2025-12-13 07:14:58.999612615 +0000 UTC m=+0.030389990 container create 631e5fc48d87819a76f4b49b83ecc7bcd38daf7509c9e36b15f526b8b4fb7a62 (image=quay.io/ceph/ceph:v20, name=compassionate_shamir, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:59 np0005558317 systemd[1]: Started libpod-conmon-631e5fc48d87819a76f4b49b83ecc7bcd38daf7509c9e36b15f526b8b4fb7a62.scope.
Dec 13 02:14:59 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Dec 13 02:14:59 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:59 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07504719ca99500a076fde8787a4c051858b21071ad1341a4e32a4681ab3e2ae/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:59 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07504719ca99500a076fde8787a4c051858b21071ad1341a4e32a4681ab3e2ae/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:59 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Dec 13 02:14:59 np0005558317 podman[94705]: 2025-12-13 07:14:59.054785483 +0000 UTC m=+0.085562878 container init 631e5fc48d87819a76f4b49b83ecc7bcd38daf7509c9e36b15f526b8b4fb7a62 (image=quay.io/ceph/ceph:v20, name=compassionate_shamir, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:59 np0005558317 podman[94705]: 2025-12-13 07:14:59.060552912 +0000 UTC m=+0.091330288 container start 631e5fc48d87819a76f4b49b83ecc7bcd38daf7509c9e36b15f526b8b4fb7a62 (image=quay.io/ceph/ceph:v20, name=compassionate_shamir, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:59 np0005558317 podman[94705]: 2025-12-13 07:14:59.062304094 +0000 UTC m=+0.093081469 container attach 631e5fc48d87819a76f4b49b83ecc7bcd38daf7509c9e36b15f526b8b4fb7a62 (image=quay.io/ceph/ceph:v20, name=compassionate_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:59 np0005558317 podman[94705]: 2025-12-13 07:14:58.986569951 +0000 UTC m=+0.017347336 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:14:59 np0005558317 ceph-mgr[75200]: [progress INFO root] Writing back 12 completed events
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).mds e3 new map
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).mds e3 print_map#012e3#012btime 2025-12-13T07:14:59:201141+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-13T07:14:50.409831+0000#012modified#0112025-12-13T07:14:50.409831+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.zwnyoz{-1:14253} state up:standby seq 1 addr [v2:192.168.122.100:6814/3518175939,v1:192.168.122.100:6815/3518175939] compat {c=[1],r=[1],i=[1fff]}]
Dec 13 02:14:59 np0005558317 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz Updating MDS map to version 3 from mon.0
Dec 13 02:14:59 np0005558317 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz Monitors have assigned me to become a standby
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/3518175939,v1:192.168.122.100:6815/3518175939] up:boot
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/3518175939,v1:192.168.122.100:6815/3518175939] as mds.0
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.zwnyoz assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.zwnyoz"} v 0)
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.zwnyoz"} : dispatch
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).mds e3 all = 0
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).mds e4 new map
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).mds e4 print_map#012e4#012btime 2025-12-13T07:14:59:205999+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-13T07:14:50.409831+0000#012modified#0112025-12-13T07:14:59.205991+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14253}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-0.zwnyoz{0:14253} state up:creating seq 1 addr [v2:192.168.122.100:6814/3518175939,v1:192.168.122.100:6815/3518175939] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.zwnyoz=up:creating}
Dec 13 02:14:59 np0005558317 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz Updating MDS map to version 4 from mon.0
Dec 13 02:14:59 np0005558317 ceph-mds[93864]: mds.0.4 handle_mds_map I am now mds.0.4
Dec 13 02:14:59 np0005558317 ceph-mds[93864]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Dec 13 02:14:59 np0005558317 ceph-mds[93864]: mds.0.cache creating system inode with ino:0x1
Dec 13 02:14:59 np0005558317 ceph-mds[93864]: mds.0.cache creating system inode with ino:0x100
Dec 13 02:14:59 np0005558317 ceph-mds[93864]: mds.0.cache creating system inode with ino:0x600
Dec 13 02:14:59 np0005558317 ceph-mds[93864]: mds.0.cache creating system inode with ino:0x601
Dec 13 02:14:59 np0005558317 ceph-mds[93864]: mds.0.cache creating system inode with ino:0x602
Dec 13 02:14:59 np0005558317 ceph-mds[93864]: mds.0.cache creating system inode with ino:0x603
Dec 13 02:14:59 np0005558317 ceph-mds[93864]: mds.0.cache creating system inode with ino:0x604
Dec 13 02:14:59 np0005558317 ceph-mds[93864]: mds.0.cache creating system inode with ino:0x605
Dec 13 02:14:59 np0005558317 ceph-mds[93864]: mds.0.cache creating system inode with ino:0x606
Dec 13 02:14:59 np0005558317 ceph-mds[93864]: mds.0.cache creating system inode with ino:0x607
Dec 13 02:14:59 np0005558317 ceph-mds[93864]: mds.0.cache creating system inode with ino:0x608
Dec 13 02:14:59 np0005558317 ceph-mds[93864]: mds.0.cache creating system inode with ino:0x609
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/1471949362' entity='client.rgw.rgw.compute-0.kikquh' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: daemon mds.cephfs.compute-0.zwnyoz assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: Cluster is now healthy
Dec 13 02:14:59 np0005558317 ceph-mds[93864]: mds.0.4 creating_done
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.zwnyoz is now active in filesystem cephfs as rank 0
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:14:59 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 02:14:59 np0005558317 compassionate_shamir[94731]: 
Dec 13 02:14:59 np0005558317 compassionate_shamir[94731]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 13 02:14:59 np0005558317 systemd[1]: libpod-631e5fc48d87819a76f4b49b83ecc7bcd38daf7509c9e36b15f526b8b4fb7a62.scope: Deactivated successfully.
Dec 13 02:14:59 np0005558317 conmon[94731]: conmon 631e5fc48d87819a76f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-631e5fc48d87819a76f4b49b83ecc7bcd38daf7509c9e36b15f526b8b4fb7a62.scope/container/memory.events
Dec 13 02:14:59 np0005558317 podman[94705]: 2025-12-13 07:14:59.433553983 +0000 UTC m=+0.464331358 container died 631e5fc48d87819a76f4b49b83ecc7bcd38daf7509c9e36b15f526b8b4fb7a62 (image=quay.io/ceph/ceph:v20, name=compassionate_shamir, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Dec 13 02:14:59 np0005558317 systemd[1]: var-lib-containers-storage-overlay-07504719ca99500a076fde8787a4c051858b21071ad1341a4e32a4681ab3e2ae-merged.mount: Deactivated successfully.
Dec 13 02:14:59 np0005558317 podman[94705]: 2025-12-13 07:14:59.453744291 +0000 UTC m=+0.484521666 container remove 631e5fc48d87819a76f4b49b83ecc7bcd38daf7509c9e36b15f526b8b4fb7a62 (image=quay.io/ceph/ceph:v20, name=compassionate_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 02:14:59 np0005558317 systemd[1]: libpod-conmon-631e5fc48d87819a76f4b49b83ecc7bcd38daf7509c9e36b15f526b8b4fb7a62.scope: Deactivated successfully.
Dec 13 02:14:59 np0005558317 podman[94897]: 2025-12-13 07:14:59.644322295 +0000 UTC m=+0.027956666 container create 51aada05e5a35dcf6f04b6108910c9d691d12ffabafeb76404cb582710a4c6a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:59 np0005558317 systemd[1]: Started libpod-conmon-51aada05e5a35dcf6f04b6108910c9d691d12ffabafeb76404cb582710a4c6a7.scope.
Dec 13 02:14:59 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:59 np0005558317 podman[94897]: 2025-12-13 07:14:59.689390241 +0000 UTC m=+0.073024622 container init 51aada05e5a35dcf6f04b6108910c9d691d12ffabafeb76404cb582710a4c6a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:59 np0005558317 podman[94897]: 2025-12-13 07:14:59.693743353 +0000 UTC m=+0.077377714 container start 51aada05e5a35dcf6f04b6108910c9d691d12ffabafeb76404cb582710a4c6a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:59 np0005558317 podman[94897]: 2025-12-13 07:14:59.694925744 +0000 UTC m=+0.078560126 container attach 51aada05e5a35dcf6f04b6108910c9d691d12ffabafeb76404cb582710a4c6a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_haibt, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:14:59 np0005558317 stoic_haibt[94911]: 167 167
Dec 13 02:14:59 np0005558317 systemd[1]: libpod-51aada05e5a35dcf6f04b6108910c9d691d12ffabafeb76404cb582710a4c6a7.scope: Deactivated successfully.
Dec 13 02:14:59 np0005558317 podman[94897]: 2025-12-13 07:14:59.697726479 +0000 UTC m=+0.081360840 container died 51aada05e5a35dcf6f04b6108910c9d691d12ffabafeb76404cb582710a4c6a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_haibt, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Dec 13 02:14:59 np0005558317 systemd[1]: var-lib-containers-storage-overlay-597ad554ce66a676e2f9788cd25e052e40c0d572de37155a73580f6819a25f8d-merged.mount: Deactivated successfully.
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Dec 13 02:14:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 37 pg[9.0( empty local-lis/les=0/0 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec 13 02:14:59 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Dec 13 02:14:59 np0005558317 podman[94897]: 2025-12-13 07:14:59.719619086 +0000 UTC m=+0.103253447 container remove 51aada05e5a35dcf6f04b6108910c9d691d12ffabafeb76404cb582710a4c6a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:14:59 np0005558317 podman[94897]: 2025-12-13 07:14:59.633165816 +0000 UTC m=+0.016800197 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:14:59 np0005558317 systemd[1]: libpod-conmon-51aada05e5a35dcf6f04b6108910c9d691d12ffabafeb76404cb582710a4c6a7.scope: Deactivated successfully.
Dec 13 02:14:59 np0005558317 podman[94932]: 2025-12-13 07:14:59.835403935 +0000 UTC m=+0.028737384 container create b8a87ebdcdd3f8c4f84e2efdd38c8156ed9e2f55a43ce5402daf7218c38f2b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_panini, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:14:59 np0005558317 systemd[1]: Started libpod-conmon-b8a87ebdcdd3f8c4f84e2efdd38c8156ed9e2f55a43ce5402daf7218c38f2b75.scope.
Dec 13 02:14:59 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:14:59 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a6eb9b1949a75ab7a724d9d6e0f80792d31e8398ea9f752a19ea24ca7c24a18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:59 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a6eb9b1949a75ab7a724d9d6e0f80792d31e8398ea9f752a19ea24ca7c24a18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:59 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a6eb9b1949a75ab7a724d9d6e0f80792d31e8398ea9f752a19ea24ca7c24a18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:59 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a6eb9b1949a75ab7a724d9d6e0f80792d31e8398ea9f752a19ea24ca7c24a18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:59 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a6eb9b1949a75ab7a724d9d6e0f80792d31e8398ea9f752a19ea24ca7c24a18/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:14:59 np0005558317 podman[94932]: 2025-12-13 07:14:59.900070367 +0000 UTC m=+0.093403835 container init b8a87ebdcdd3f8c4f84e2efdd38c8156ed9e2f55a43ce5402daf7218c38f2b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_panini, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:14:59 np0005558317 podman[94932]: 2025-12-13 07:14:59.905283295 +0000 UTC m=+0.098616743 container start b8a87ebdcdd3f8c4f84e2efdd38c8156ed9e2f55a43ce5402daf7218c38f2b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:14:59 np0005558317 podman[94932]: 2025-12-13 07:14:59.906476547 +0000 UTC m=+0.099809996 container attach b8a87ebdcdd3f8c4f84e2efdd38c8156ed9e2f55a43ce5402daf7218c38f2b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_panini, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:14:59 np0005558317 podman[94932]: 2025-12-13 07:14:59.824128573 +0000 UTC m=+0.017462031 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:15:00 np0005558317 python3[94977]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:15:00 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v69: 195 pgs: 2 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:15:00 np0005558317 podman[94985]: 2025-12-13 07:15:00.22198548 +0000 UTC m=+0.030024913 container create cc29cf4abdfb216040c9c11e5f2feb9647618a244db1e2290f76b6d1b43b5e19 (image=quay.io/ceph/ceph:v20, name=elastic_lederberg, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:15:00 np0005558317 ceph-mon[74928]: daemon mds.cephfs.compute-0.zwnyoz is now active in filesystem cephfs as rank 0
Dec 13 02:15:00 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:15:00 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:15:00 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:15:00 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:15:00 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:15:00 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Dec 13 02:15:00 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).mds e5 new map
Dec 13 02:15:00 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).mds e5 print_map
e5
btime 2025-12-13T07:15:00:227157+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	5
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-12-13T07:14:50.409831+0000
modified	2025-12-13T07:15:00.227156+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds	1
in	0
up	{0=14253}
failed
damaged
stopped
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer
bal_rank_mask	-1
standby_count_wanted	0
qdb_cluster	leader: 14253 members: 14253
[mds.cephfs.compute-0.zwnyoz{0:14253} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/3518175939,v1:192.168.122.100:6815/3518175939] compat {c=[1],r=[1],i=[1fff]}]
Dec 13 02:15:00 np0005558317 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz Updating MDS map to version 5 from mon.0
Dec 13 02:15:00 np0005558317 ceph-mds[93864]: mds.0.4 handle_mds_map I am now mds.0.4
Dec 13 02:15:00 np0005558317 ceph-mds[93864]: mds.0.4 handle_mds_map state change up:creating --> up:active
Dec 13 02:15:00 np0005558317 ceph-mds[93864]: mds.0.4 recovery_done -- successful recovery!
Dec 13 02:15:00 np0005558317 ceph-mds[93864]: mds.0.4 active_start
Dec 13 02:15:00 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/3518175939,v1:192.168.122.100:6815/3518175939] up:active
Dec 13 02:15:00 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.zwnyoz=up:active}
Dec 13 02:15:00 np0005558317 systemd[1]: Started libpod-conmon-cc29cf4abdfb216040c9c11e5f2feb9647618a244db1e2290f76b6d1b43b5e19.scope.
Dec 13 02:15:00 np0005558317 objective_panini[94945]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:15:00 np0005558317 objective_panini[94945]: --> All data devices are unavailable
Dec 13 02:15:00 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:15:00 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44f586ba1bc7c23302f55082e53b8073af43974baea8b816d7878fbb7d98c94/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:00 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44f586ba1bc7c23302f55082e53b8073af43974baea8b816d7878fbb7d98c94/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:00 np0005558317 podman[94985]: 2025-12-13 07:15:00.280554271 +0000 UTC m=+0.088593694 container init cc29cf4abdfb216040c9c11e5f2feb9647618a244db1e2290f76b6d1b43b5e19 (image=quay.io/ceph/ceph:v20, name=elastic_lederberg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:15:00 np0005558317 podman[94985]: 2025-12-13 07:15:00.286499556 +0000 UTC m=+0.094538979 container start cc29cf4abdfb216040c9c11e5f2feb9647618a244db1e2290f76b6d1b43b5e19 (image=quay.io/ceph/ceph:v20, name=elastic_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 02:15:00 np0005558317 podman[94985]: 2025-12-13 07:15:00.287932588 +0000 UTC m=+0.095972011 container attach cc29cf4abdfb216040c9c11e5f2feb9647618a244db1e2290f76b6d1b43b5e19 (image=quay.io/ceph/ceph:v20, name=elastic_lederberg, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 02:15:00 np0005558317 systemd[1]: libpod-b8a87ebdcdd3f8c4f84e2efdd38c8156ed9e2f55a43ce5402daf7218c38f2b75.scope: Deactivated successfully.
Dec 13 02:15:00 np0005558317 podman[94932]: 2025-12-13 07:15:00.296599889 +0000 UTC m=+0.489933357 container died b8a87ebdcdd3f8c4f84e2efdd38c8156ed9e2f55a43ce5402daf7218c38f2b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_panini, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 02:15:00 np0005558317 podman[94985]: 2025-12-13 07:15:00.209479686 +0000 UTC m=+0.017519129 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:15:00 np0005558317 podman[94932]: 2025-12-13 07:15:00.31836662 +0000 UTC m=+0.511700069 container remove b8a87ebdcdd3f8c4f84e2efdd38c8156ed9e2f55a43ce5402daf7218c38f2b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_panini, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030)
Dec 13 02:15:00 np0005558317 systemd[1]: libpod-conmon-b8a87ebdcdd3f8c4f84e2efdd38c8156ed9e2f55a43ce5402daf7218c38f2b75.scope: Deactivated successfully.
Dec 13 02:15:00 np0005558317 systemd[1]: var-lib-containers-storage-overlay-2a6eb9b1949a75ab7a724d9d6e0f80792d31e8398ea9f752a19ea24ca7c24a18-merged.mount: Deactivated successfully.
Dec 13 02:15:00 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 02:15:00 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.kikquh", "name": "rgw_frontends"} v 0)
Dec 13 02:15:00 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kikquh", "name": "rgw_frontends"} : dispatch
Dec 13 02:15:00 np0005558317 elastic_lederberg[95006]: 
Dec 13 02:15:00 np0005558317 elastic_lederberg[95006]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_exit_timeout_secs": 120, "rgw_frontend_port": 8082}}]
Dec 13 02:15:00 np0005558317 systemd[1]: libpod-cc29cf4abdfb216040c9c11e5f2feb9647618a244db1e2290f76b6d1b43b5e19.scope: Deactivated successfully.
Dec 13 02:15:00 np0005558317 podman[94985]: 2025-12-13 07:15:00.622504231 +0000 UTC m=+0.430543653 container died cc29cf4abdfb216040c9c11e5f2feb9647618a244db1e2290f76b6d1b43b5e19 (image=quay.io/ceph/ceph:v20, name=elastic_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 02:15:00 np0005558317 systemd[1]: var-lib-containers-storage-overlay-c44f586ba1bc7c23302f55082e53b8073af43974baea8b816d7878fbb7d98c94-merged.mount: Deactivated successfully.
Dec 13 02:15:00 np0005558317 podman[94985]: 2025-12-13 07:15:00.647245643 +0000 UTC m=+0.455285066 container remove cc29cf4abdfb216040c9c11e5f2feb9647618a244db1e2290f76b6d1b43b5e19 (image=quay.io/ceph/ceph:v20, name=elastic_lederberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 02:15:00 np0005558317 systemd[1]: libpod-conmon-cc29cf4abdfb216040c9c11e5f2feb9647618a244db1e2290f76b6d1b43b5e19.scope: Deactivated successfully.
Dec 13 02:15:00 np0005558317 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 02:15:00 np0005558317 podman[95107]: 2025-12-13 07:15:00.679372846 +0000 UTC m=+0.031627185 container create 2628a455774141af8be81e344af0d9e7e19e7e9df2183edf11c9f48d5a6fa5b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 02:15:00 np0005558317 systemd[1]: Started libpod-conmon-2628a455774141af8be81e344af0d9e7e19e7e9df2183edf11c9f48d5a6fa5b5.scope.
Dec 13 02:15:00 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Dec 13 02:15:00 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 13 02:15:00 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Dec 13 02:15:00 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Dec 13 02:15:00 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 38 pg[9.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:00 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:15:00 np0005558317 podman[95107]: 2025-12-13 07:15:00.737232493 +0000 UTC m=+0.089486843 container init 2628a455774141af8be81e344af0d9e7e19e7e9df2183edf11c9f48d5a6fa5b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_sanderson, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:15:00 np0005558317 podman[95107]: 2025-12-13 07:15:00.742535812 +0000 UTC m=+0.094790141 container start 2628a455774141af8be81e344af0d9e7e19e7e9df2183edf11c9f48d5a6fa5b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:15:00 np0005558317 podman[95107]: 2025-12-13 07:15:00.743925704 +0000 UTC m=+0.096180033 container attach 2628a455774141af8be81e344af0d9e7e19e7e9df2183edf11c9f48d5a6fa5b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:15:00 np0005558317 determined_sanderson[95126]: 167 167
Dec 13 02:15:00 np0005558317 systemd[1]: libpod-2628a455774141af8be81e344af0d9e7e19e7e9df2183edf11c9f48d5a6fa5b5.scope: Deactivated successfully.
Dec 13 02:15:00 np0005558317 podman[95107]: 2025-12-13 07:15:00.74615859 +0000 UTC m=+0.098412919 container died 2628a455774141af8be81e344af0d9e7e19e7e9df2183edf11c9f48d5a6fa5b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_sanderson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 02:15:00 np0005558317 systemd[1]: var-lib-containers-storage-overlay-e3a2571871d8bf96b6fba481ef06417170af90b23fd81f1d5a1a70abf5772651-merged.mount: Deactivated successfully.
Dec 13 02:15:00 np0005558317 podman[95107]: 2025-12-13 07:15:00.666739612 +0000 UTC m=+0.018993961 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:15:00 np0005558317 podman[95107]: 2025-12-13 07:15:00.767318992 +0000 UTC m=+0.119573321 container remove 2628a455774141af8be81e344af0d9e7e19e7e9df2183edf11c9f48d5a6fa5b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:15:00 np0005558317 systemd[1]: libpod-conmon-2628a455774141af8be81e344af0d9e7e19e7e9df2183edf11c9f48d5a6fa5b5.scope: Deactivated successfully.
Dec 13 02:15:00 np0005558317 podman[95149]: 2025-12-13 07:15:00.882394909 +0000 UTC m=+0.029400900 container create 968d2d5ad450dd8b3da2fb3bc46181b5b735fcee29b10ad271709823a14f2ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 02:15:00 np0005558317 systemd[1]: Started libpod-conmon-968d2d5ad450dd8b3da2fb3bc46181b5b735fcee29b10ad271709823a14f2ec3.scope.
Dec 13 02:15:00 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:15:00 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2546699b44423471dabb82bda6c93d714d0739a991974b6cefa963ee20ec328f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:00 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2546699b44423471dabb82bda6c93d714d0739a991974b6cefa963ee20ec328f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:00 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2546699b44423471dabb82bda6c93d714d0739a991974b6cefa963ee20ec328f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:00 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2546699b44423471dabb82bda6c93d714d0739a991974b6cefa963ee20ec328f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:00 np0005558317 podman[95149]: 2025-12-13 07:15:00.950815256 +0000 UTC m=+0.097821247 container init 968d2d5ad450dd8b3da2fb3bc46181b5b735fcee29b10ad271709823a14f2ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_jemison, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 02:15:00 np0005558317 podman[95149]: 2025-12-13 07:15:00.955472639 +0000 UTC m=+0.102478620 container start 968d2d5ad450dd8b3da2fb3bc46181b5b735fcee29b10ad271709823a14f2ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_jemison, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:15:00 np0005558317 podman[95149]: 2025-12-13 07:15:00.956692192 +0000 UTC m=+0.103698183 container attach 968d2d5ad450dd8b3da2fb3bc46181b5b735fcee29b10ad271709823a14f2ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 02:15:00 np0005558317 podman[95149]: 2025-12-13 07:15:00.869581977 +0000 UTC m=+0.016587978 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:15:01 np0005558317 epic_jemison[95162]: {
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:    "0": [
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:        {
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "devices": [
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "/dev/loop3"
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            ],
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "lv_name": "ceph_lv0",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "lv_size": "21470642176",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "name": "ceph_lv0",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "tags": {
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.cluster_name": "ceph",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.crush_device_class": "",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.encrypted": "0",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.objectstore": "bluestore",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.osd_id": "0",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.type": "block",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.vdo": "0",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.with_tpm": "0"
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            },
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "type": "block",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "vg_name": "ceph_vg0"
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:        }
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:    ],
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:    "1": [
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:        {
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "devices": [
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "/dev/loop4"
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            ],
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "lv_name": "ceph_lv1",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "lv_size": "21470642176",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "name": "ceph_lv1",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "tags": {
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.cluster_name": "ceph",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.crush_device_class": "",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.encrypted": "0",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.objectstore": "bluestore",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.osd_id": "1",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.type": "block",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.vdo": "0",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.with_tpm": "0"
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            },
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "type": "block",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "vg_name": "ceph_vg1"
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:        }
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:    ],
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:    "2": [
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:        {
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "devices": [
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "/dev/loop5"
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            ],
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "lv_name": "ceph_lv2",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "lv_size": "21470642176",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "name": "ceph_lv2",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "tags": {
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.cluster_name": "ceph",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.crush_device_class": "",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.encrypted": "0",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.objectstore": "bluestore",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.osd_id": "2",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.type": "block",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.vdo": "0",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:                "ceph.with_tpm": "0"
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            },
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "type": "block",
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:            "vg_name": "ceph_vg2"
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:        }
Dec 13 02:15:01 np0005558317 epic_jemison[95162]:    ]
Dec 13 02:15:01 np0005558317 epic_jemison[95162]: }
Dec 13 02:15:01 np0005558317 systemd[1]: libpod-968d2d5ad450dd8b3da2fb3bc46181b5b735fcee29b10ad271709823a14f2ec3.scope: Deactivated successfully.
Dec 13 02:15:01 np0005558317 conmon[95162]: conmon 968d2d5ad450dd8b3da2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-968d2d5ad450dd8b3da2fb3bc46181b5b735fcee29b10ad271709823a14f2ec3.scope/container/memory.events
Dec 13 02:15:01 np0005558317 podman[95149]: 2025-12-13 07:15:01.214712734 +0000 UTC m=+0.361718715 container died 968d2d5ad450dd8b3da2fb3bc46181b5b735fcee29b10ad271709823a14f2ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 02:15:01 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 13 02:15:01 np0005558317 podman[95149]: 2025-12-13 07:15:01.23786545 +0000 UTC m=+0.384871431 container remove 968d2d5ad450dd8b3da2fb3bc46181b5b735fcee29b10ad271709823a14f2ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:15:01 np0005558317 systemd[1]: libpod-conmon-968d2d5ad450dd8b3da2fb3bc46181b5b735fcee29b10ad271709823a14f2ec3.scope: Deactivated successfully.
Dec 13 02:15:01 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Dec 13 02:15:01 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Dec 13 02:15:01 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Dec 13 02:15:01 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Dec 13 02:15:01 np0005558317 python3[95225]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:15:01 np0005558317 systemd[1]: var-lib-containers-storage-overlay-2546699b44423471dabb82bda6c93d714d0739a991974b6cefa963ee20ec328f-merged.mount: Deactivated successfully.
Dec 13 02:15:01 np0005558317 podman[95258]: 2025-12-13 07:15:01.459113879 +0000 UTC m=+0.030331860 container create a5268b65ec7bf37d287a2ea2bfe54a05288ea687e5831d77e89536be0cc35191 (image=quay.io/ceph/ceph:v20, name=naughty_shannon, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 02:15:01 np0005558317 systemd[1]: Started libpod-conmon-a5268b65ec7bf37d287a2ea2bfe54a05288ea687e5831d77e89536be0cc35191.scope.
Dec 13 02:15:01 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:15:01 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b5df26e9aa3b239147935e011713e28d91f577546bb955c8dcae275beab691/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:01 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b5df26e9aa3b239147935e011713e28d91f577546bb955c8dcae275beab691/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:01 np0005558317 podman[95258]: 2025-12-13 07:15:01.513579468 +0000 UTC m=+0.084797459 container init a5268b65ec7bf37d287a2ea2bfe54a05288ea687e5831d77e89536be0cc35191 (image=quay.io/ceph/ceph:v20, name=naughty_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 02:15:01 np0005558317 podman[95258]: 2025-12-13 07:15:01.519220521 +0000 UTC m=+0.090438492 container start a5268b65ec7bf37d287a2ea2bfe54a05288ea687e5831d77e89536be0cc35191 (image=quay.io/ceph/ceph:v20, name=naughty_shannon, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:15:01 np0005558317 podman[95258]: 2025-12-13 07:15:01.520575928 +0000 UTC m=+0.091793899 container attach a5268b65ec7bf37d287a2ea2bfe54a05288ea687e5831d77e89536be0cc35191 (image=quay.io/ceph/ceph:v20, name=naughty_shannon, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:15:01 np0005558317 podman[95258]: 2025-12-13 07:15:01.44871206 +0000 UTC m=+0.019930050 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:15:01 np0005558317 podman[95284]: 2025-12-13 07:15:01.581692387 +0000 UTC m=+0.025996892 container create 1cdad2d7fbd7f8daf7b1d3e3c6f1961679d704b8c9ee6930fdcc10447ead451a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hugle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 02:15:01 np0005558317 systemd[1]: Started libpod-conmon-1cdad2d7fbd7f8daf7b1d3e3c6f1961679d704b8c9ee6930fdcc10447ead451a.scope.
Dec 13 02:15:01 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:15:01 np0005558317 podman[95284]: 2025-12-13 07:15:01.630201952 +0000 UTC m=+0.074506467 container init 1cdad2d7fbd7f8daf7b1d3e3c6f1961679d704b8c9ee6930fdcc10447ead451a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:15:01 np0005558317 podman[95284]: 2025-12-13 07:15:01.634195257 +0000 UTC m=+0.078499753 container start 1cdad2d7fbd7f8daf7b1d3e3c6f1961679d704b8c9ee6930fdcc10447ead451a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hugle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 02:15:01 np0005558317 suspicious_hugle[95310]: 167 167
Dec 13 02:15:01 np0005558317 podman[95284]: 2025-12-13 07:15:01.636932272 +0000 UTC m=+0.081236767 container attach 1cdad2d7fbd7f8daf7b1d3e3c6f1961679d704b8c9ee6930fdcc10447ead451a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hugle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 02:15:01 np0005558317 systemd[1]: libpod-1cdad2d7fbd7f8daf7b1d3e3c6f1961679d704b8c9ee6930fdcc10447ead451a.scope: Deactivated successfully.
Dec 13 02:15:01 np0005558317 podman[95284]: 2025-12-13 07:15:01.637755068 +0000 UTC m=+0.082059563 container died 1cdad2d7fbd7f8daf7b1d3e3c6f1961679d704b8c9ee6930fdcc10447ead451a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hugle, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 02:15:01 np0005558317 systemd[1]: var-lib-containers-storage-overlay-4d6708cd94a186475e5538ed68b23fb912e3cc149ae5c8a3762a148e32a29233-merged.mount: Deactivated successfully.
Dec 13 02:15:01 np0005558317 podman[95284]: 2025-12-13 07:15:01.656791517 +0000 UTC m=+0.101096012 container remove 1cdad2d7fbd7f8daf7b1d3e3c6f1961679d704b8c9ee6930fdcc10447ead451a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hugle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:15:01 np0005558317 podman[95284]: 2025-12-13 07:15:01.571391627 +0000 UTC m=+0.015696143 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:15:01 np0005558317 systemd[1]: libpod-conmon-1cdad2d7fbd7f8daf7b1d3e3c6f1961679d704b8c9ee6930fdcc10447ead451a.scope: Deactivated successfully.
Dec 13 02:15:01 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Dec 13 02:15:01 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Dec 13 02:15:01 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Dec 13 02:15:01 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec 13 02:15:01 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Dec 13 02:15:01 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 39 pg[10.0( empty local-lis/les=0/0 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:01 np0005558317 podman[95337]: 2025-12-13 07:15:01.768772988 +0000 UTC m=+0.026499006 container create 495fad931ae444e6135750b1615a413428edee74a92468cb3b9bdef2003a0cbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_albattani, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:15:01 np0005558317 systemd[1]: Started libpod-conmon-495fad931ae444e6135750b1615a413428edee74a92468cb3b9bdef2003a0cbe.scope.
Dec 13 02:15:01 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:15:01 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e474fa75a8a94b5f75cea44792aabd3f4465a74671c8d1f0fece73a598193ba4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:01 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e474fa75a8a94b5f75cea44792aabd3f4465a74671c8d1f0fece73a598193ba4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:01 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e474fa75a8a94b5f75cea44792aabd3f4465a74671c8d1f0fece73a598193ba4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:01 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e474fa75a8a94b5f75cea44792aabd3f4465a74671c8d1f0fece73a598193ba4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:01 np0005558317 podman[95337]: 2025-12-13 07:15:01.820115246 +0000 UTC m=+0.077841264 container init 495fad931ae444e6135750b1615a413428edee74a92468cb3b9bdef2003a0cbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_albattani, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:15:01 np0005558317 podman[95337]: 2025-12-13 07:15:01.824578185 +0000 UTC m=+0.082304202 container start 495fad931ae444e6135750b1615a413428edee74a92468cb3b9bdef2003a0cbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_albattani, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:15:01 np0005558317 podman[95337]: 2025-12-13 07:15:01.830930484 +0000 UTC m=+0.088656502 container attach 495fad931ae444e6135750b1615a413428edee74a92468cb3b9bdef2003a0cbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_albattani, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:15:01 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 02:15:01 np0005558317 naughty_shannon[95270]: 
Dec 13 02:15:01 np0005558317 naughty_shannon[95270]: [{"container_id": "8e6a4f61ea03", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.21%", "created": "2025-12-13T07:13:55.466128Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-12-13T07:13:55.505998Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-13T07:14:59.309109Z", "memory_usage": 7803502, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2025-12-13T07:13:55.400266Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de@crash.compute-0", "version": "20.2.0"}, {"container_id": "c65ab07d188f", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "5.68%", "created": "2025-12-13T07:14:58.166394Z", "daemon_id": "cephfs.compute-0.zwnyoz", "daemon_name": "mds.cephfs.compute-0.zwnyoz", "daemon_type": "mds", "events": ["2025-12-13T07:14:58.207332Z daemon:mds.cephfs.compute-0.zwnyoz [INFO] \"Deployed mds.cephfs.compute-0.zwnyoz on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": 
"2025-12-13T07:14:59.309475Z", "memory_usage": 12645826, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "started": "2025-12-13T07:14:58.094347Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de@mds.cephfs.compute-0.zwnyoz", "version": "20.2.0"}, {"container_id": "4d78867918d5", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "18.35%", "created": "2025-12-13T07:13:23.817984Z", "daemon_id": "compute-0.qsherl", "daemon_name": "mgr.compute-0.qsherl", "daemon_type": "mgr", "events": ["2025-12-13T07:13:58.693619Z daemon:mgr.compute-0.qsherl [INFO] \"Reconfigured mgr.compute-0.qsherl on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-13T07:14:59.309038Z", "memory_usage": 546727526, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-12-13T07:13:23.760333Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de@mgr.compute-0.qsherl", "version": "20.2.0"}, {"container_id": "4656a144eefb", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.12%", "created": "2025-12-13T07:13:21.270972Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-12-13T07:13:58.201111Z daemon:mon.compute-0 [INFO] 
\"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-13T07:14:59.308939Z", "memory_request": 2147483648, "memory_usage": 40160460, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2025-12-13T07:13:22.585200Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de@mon.compute-0", "version": "20.2.0"}, {"container_id": "5e169e1385f9", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.28%", "created": "2025-12-13T07:14:12.157205Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-12-13T07:14:12.197502Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-13T07:14:59.309175Z", "memory_request": 4294967296, "memory_usage": 68105011, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-13T07:14:12.086798Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de@osd.0", "version": "20.2.0"}, {"container_id": "c0e0c03f97b0", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", 
"cpu_percentage": "1.38%", "created": "2025-12-13T07:14:15.231219Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-12-13T07:14:15.322829Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-13T07:14:59.309240Z", "memory_request": 4294967296, "memory_usage": 68105011, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-13T07:14:15.052351Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de@osd.1", "version": "20.2.0"}, {"container_id": "bb7cd2f636f6", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.47%", "created": "2025-12-13T07:14:18.155232Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-12-13T07:14:18.245140Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-13T07:14:59.309305Z", "memory_request": 4294967296, "memory_usage": 69688360, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-13T07:14:17.988018Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de@osd.2", "version": "20.2.0"}, {"container_id": "69ac193e949f", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], 
"container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac68
Dec 13 02:15:01 np0005558317 podman[95337]: 2025-12-13 07:15:01.757767953 +0000 UTC m=+0.015493991 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:15:01 np0005558317 systemd[1]: libpod-a5268b65ec7bf37d287a2ea2bfe54a05288ea687e5831d77e89536be0cc35191.scope: Deactivated successfully.
Dec 13 02:15:01 np0005558317 podman[95258]: 2025-12-13 07:15:01.858482728 +0000 UTC m=+0.429700699 container died a5268b65ec7bf37d287a2ea2bfe54a05288ea687e5831d77e89536be0cc35191 (image=quay.io/ceph/ceph:v20, name=naughty_shannon, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:15:01 np0005558317 podman[95258]: 2025-12-13 07:15:01.877781061 +0000 UTC m=+0.448999031 container remove a5268b65ec7bf37d287a2ea2bfe54a05288ea687e5831d77e89536be0cc35191 (image=quay.io/ceph/ceph:v20, name=naughty_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Dec 13 02:15:01 np0005558317 systemd[1]: libpod-conmon-a5268b65ec7bf37d287a2ea2bfe54a05288ea687e5831d77e89536be0cc35191.scope: Deactivated successfully.
Dec 13 02:15:02 np0005558317 ansible-async_wrapper.py[93568]: Done in kid B.
Dec 13 02:15:02 np0005558317 rsyslogd[962]: message too long (8842) with configured size 8096, begin of message is: [{"container_id": "8e6a4f61ea03", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 13 02:15:02 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v72: 196 pgs: 1 unknown, 1 creating+peering, 194 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 6 op/s
Dec 13 02:15:02 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Dec 13 02:15:02 np0005558317 lvm[95441]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:15:02 np0005558317 lvm[95440]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:15:02 np0005558317 lvm[95440]: VG ceph_vg0 finished
Dec 13 02:15:02 np0005558317 lvm[95441]: VG ceph_vg1 finished
Dec 13 02:15:02 np0005558317 lvm[95444]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:15:02 np0005558317 lvm[95444]: VG ceph_vg2 finished
Dec 13 02:15:02 np0005558317 systemd[1]: var-lib-containers-storage-overlay-c1b5df26e9aa3b239147935e011713e28d91f577546bb955c8dcae275beab691-merged.mount: Deactivated successfully.
Dec 13 02:15:02 np0005558317 condescending_albattani[95351]: {}
Dec 13 02:15:02 np0005558317 systemd[1]: libpod-495fad931ae444e6135750b1615a413428edee74a92468cb3b9bdef2003a0cbe.scope: Deactivated successfully.
Dec 13 02:15:02 np0005558317 systemd[1]: libpod-495fad931ae444e6135750b1615a413428edee74a92468cb3b9bdef2003a0cbe.scope: Consumed 1.007s CPU time.
Dec 13 02:15:02 np0005558317 podman[95337]: 2025-12-13 07:15:02.479921771 +0000 UTC m=+0.737647809 container died 495fad931ae444e6135750b1615a413428edee74a92468cb3b9bdef2003a0cbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 02:15:02 np0005558317 systemd[1]: var-lib-containers-storage-overlay-e474fa75a8a94b5f75cea44792aabd3f4465a74671c8d1f0fece73a598193ba4-merged.mount: Deactivated successfully.
Dec 13 02:15:02 np0005558317 podman[95337]: 2025-12-13 07:15:02.502679224 +0000 UTC m=+0.760405243 container remove 495fad931ae444e6135750b1615a413428edee74a92468cb3b9bdef2003a0cbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_albattani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:15:02 np0005558317 systemd[1]: libpod-conmon-495fad931ae444e6135750b1615a413428edee74a92468cb3b9bdef2003a0cbe.scope: Deactivated successfully.
Dec 13 02:15:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:15:02 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:15:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:15:02 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:15:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:15:02 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:15:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:15:02 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:15:02 np0005558317 python3[95494]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:15:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:15:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Dec 13 02:15:02 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 13 02:15:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Dec 13 02:15:02 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Dec 13 02:15:02 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 40 pg[10.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:02 np0005558317 podman[95557]: 2025-12-13 07:15:02.751809558 +0000 UTC m=+0.052679632 container create 1c75db0d9cd6774bc46fdd886fb79c6e4ce21c2c6a6dcf1d50f39727e4926d1c (image=quay.io/ceph/ceph:v20, name=busy_johnson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 02:15:02 np0005558317 systemd[1]: Started libpod-conmon-1c75db0d9cd6774bc46fdd886fb79c6e4ce21c2c6a6dcf1d50f39727e4926d1c.scope.
Dec 13 02:15:02 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:15:02 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d069fced84f6c1079ac6088d4302a6c53f633f3e354948f543bfdd122f73bbf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:02 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d069fced84f6c1079ac6088d4302a6c53f633f3e354948f543bfdd122f73bbf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:02 np0005558317 podman[95557]: 2025-12-13 07:15:02.809222225 +0000 UTC m=+0.110092300 container init 1c75db0d9cd6774bc46fdd886fb79c6e4ce21c2c6a6dcf1d50f39727e4926d1c (image=quay.io/ceph/ceph:v20, name=busy_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 02:15:02 np0005558317 podman[95557]: 2025-12-13 07:15:02.720344018 +0000 UTC m=+0.021214092 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:15:02 np0005558317 podman[95557]: 2025-12-13 07:15:02.814585185 +0000 UTC m=+0.115455259 container start 1c75db0d9cd6774bc46fdd886fb79c6e4ce21c2c6a6dcf1d50f39727e4926d1c (image=quay.io/ceph/ceph:v20, name=busy_johnson, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:15:02 np0005558317 podman[95557]: 2025-12-13 07:15:02.823495642 +0000 UTC m=+0.124365736 container attach 1c75db0d9cd6774bc46fdd886fb79c6e4ce21c2c6a6dcf1d50f39727e4926d1c (image=quay.io/ceph/ceph:v20, name=busy_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 02:15:03 np0005558317 podman[95631]: 2025-12-13 07:15:03.04263049 +0000 UTC m=+0.037878725 container exec 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 02:15:03 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Dec 13 02:15:03 np0005558317 podman[95631]: 2025-12-13 07:15:03.124852036 +0000 UTC m=+0.120100271 container exec_died 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 02:15:03 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Dec 13 02:15:03 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 13 02:15:03 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2572883331' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 13 02:15:03 np0005558317 busy_johnson[95571]: 
Dec 13 02:15:03 np0005558317 busy_johnson[95571]: {"fsid":"00fdae1b-7fad-5f1b-8734-ba4d9298a6de","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":100,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":40,"num_osds":3,"num_up_osds":3,"osd_up_since":1765610062,"num_in_osds":3,"osd_in_since":1765610047,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":194},{"state_name":"creating+peering","count":1},{"state_name":"unknown","count":1}],"num_pgs":196,"num_pools":10,"num_objects":13,"data_bytes":461030,"bytes_used":84062208,"bytes_avail":64327864320,"bytes_total":64411926528,"unknown_pgs_ratio":0.0051020407117903233,"inactive_pgs_ratio":0.0051020407117903233,"read_bytes_sec":1279,"write_bytes_sec":2047,"read_op_per_sec":0,"write_op_per_sec":5},"fsmap":{"epoch":5,"btime":"2025-12-13T07:15:00:227157+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.zwnyoz","status":"up:active","gid":14253}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-13T07:14:40.194918+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Dec 13 02:15:03 np0005558317 systemd[1]: libpod-1c75db0d9cd6774bc46fdd886fb79c6e4ce21c2c6a6dcf1d50f39727e4926d1c.scope: Deactivated successfully.
Dec 13 02:15:03 np0005558317 podman[95557]: 2025-12-13 07:15:03.216188855 +0000 UTC m=+0.517058949 container died 1c75db0d9cd6774bc46fdd886fb79c6e4ce21c2c6a6dcf1d50f39727e4926d1c (image=quay.io/ceph/ceph:v20, name=busy_johnson, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 02:15:03 np0005558317 systemd[1]: var-lib-containers-storage-overlay-3d069fced84f6c1079ac6088d4302a6c53f633f3e354948f543bfdd122f73bbf-merged.mount: Deactivated successfully.
Dec 13 02:15:03 np0005558317 podman[95557]: 2025-12-13 07:15:03.244457316 +0000 UTC m=+0.545327391 container remove 1c75db0d9cd6774bc46fdd886fb79c6e4ce21c2c6a6dcf1d50f39727e4926d1c (image=quay.io/ceph/ceph:v20, name=busy_johnson, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:15:03 np0005558317 systemd[1]: libpod-conmon-1c75db0d9cd6774bc46fdd886fb79c6e4ce21c2c6a6dcf1d50f39727e4926d1c.scope: Deactivated successfully.
Dec 13 02:15:03 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:15:03 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:15:03 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:15:03 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:15:03 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 13 02:15:03 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:15:03 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:15:03 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:15:03 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:15:03 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:15:03 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:15:03 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:15:03 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:15:03 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:15:03 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:15:03 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:15:03 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:15:03 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:15:03 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:15:03 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:15:03 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:15:03 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Dec 13 02:15:03 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Dec 13 02:15:03 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Dec 13 02:15:03 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec 13 02:15:03 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Dec 13 02:15:03 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 41 pg[11.0( empty local-lis/les=0/0 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [1] r=0 lpr=41 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:03 np0005558317 podman[95876]: 2025-12-13 07:15:03.964759142 +0000 UTC m=+0.028617199 container create eacf135a8b57c2ee01f3c02f54d01f57cfd036b1dfd9680ece654170d75766f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_newton, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 02:15:03 np0005558317 systemd[1]: Started libpod-conmon-eacf135a8b57c2ee01f3c02f54d01f57cfd036b1dfd9680ece654170d75766f5.scope.
Dec 13 02:15:03 np0005558317 python3[95865]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:15:04 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:15:04 np0005558317 podman[95876]: 2025-12-13 07:15:04.022938572 +0000 UTC m=+0.086796648 container init eacf135a8b57c2ee01f3c02f54d01f57cfd036b1dfd9680ece654170d75766f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 02:15:04 np0005558317 podman[95876]: 2025-12-13 07:15:04.02788602 +0000 UTC m=+0.091744086 container start eacf135a8b57c2ee01f3c02f54d01f57cfd036b1dfd9680ece654170d75766f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_newton, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 02:15:04 np0005558317 podman[95876]: 2025-12-13 07:15:04.02890732 +0000 UTC m=+0.092765386 container attach eacf135a8b57c2ee01f3c02f54d01f57cfd036b1dfd9680ece654170d75766f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:15:04 np0005558317 strange_newton[95889]: 167 167
Dec 13 02:15:04 np0005558317 systemd[1]: libpod-eacf135a8b57c2ee01f3c02f54d01f57cfd036b1dfd9680ece654170d75766f5.scope: Deactivated successfully.
Dec 13 02:15:04 np0005558317 podman[95876]: 2025-12-13 07:15:04.030702203 +0000 UTC m=+0.094560270 container died eacf135a8b57c2ee01f3c02f54d01f57cfd036b1dfd9680ece654170d75766f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 02:15:04 np0005558317 podman[95892]: 2025-12-13 07:15:04.038157666 +0000 UTC m=+0.030654917 container create 2738ed6eece7fcff7791bba57d43b3e1eedbc0a0fa35f3859cd1105ba3c0c685 (image=quay.io/ceph/ceph:v20, name=frosty_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:15:04 np0005558317 systemd[1]: var-lib-containers-storage-overlay-5aefdd941f55b5a940c9715f5bbe1b1893aa7581f7fe0c4e32163b40831174da-merged.mount: Deactivated successfully.
Dec 13 02:15:04 np0005558317 podman[95876]: 2025-12-13 07:15:03.952648721 +0000 UTC m=+0.016506807 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:15:04 np0005558317 podman[95876]: 2025-12-13 07:15:04.055718271 +0000 UTC m=+0.119576337 container remove eacf135a8b57c2ee01f3c02f54d01f57cfd036b1dfd9680ece654170d75766f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_newton, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 02:15:04 np0005558317 systemd[1]: Started libpod-conmon-2738ed6eece7fcff7791bba57d43b3e1eedbc0a0fa35f3859cd1105ba3c0c685.scope.
Dec 13 02:15:04 np0005558317 systemd[1]: libpod-conmon-eacf135a8b57c2ee01f3c02f54d01f57cfd036b1dfd9680ece654170d75766f5.scope: Deactivated successfully.
Dec 13 02:15:04 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:15:04 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8ce6819b805382e51e1a277b98a340dce2bcd70050c824108be109229b7686/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:04 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8ce6819b805382e51e1a277b98a340dce2bcd70050c824108be109229b7686/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:04 np0005558317 podman[95892]: 2025-12-13 07:15:04.093228121 +0000 UTC m=+0.085725382 container init 2738ed6eece7fcff7791bba57d43b3e1eedbc0a0fa35f3859cd1105ba3c0c685 (image=quay.io/ceph/ceph:v20, name=frosty_mendeleev, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 02:15:04 np0005558317 podman[95892]: 2025-12-13 07:15:04.097241755 +0000 UTC m=+0.089739007 container start 2738ed6eece7fcff7791bba57d43b3e1eedbc0a0fa35f3859cd1105ba3c0c685 (image=quay.io/ceph/ceph:v20, name=frosty_mendeleev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 02:15:04 np0005558317 podman[95892]: 2025-12-13 07:15:04.098479362 +0000 UTC m=+0.090976613 container attach 2738ed6eece7fcff7791bba57d43b3e1eedbc0a0fa35f3859cd1105ba3c0c685 (image=quay.io/ceph/ceph:v20, name=frosty_mendeleev, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:15:04 np0005558317 podman[95892]: 2025-12-13 07:15:04.025796533 +0000 UTC m=+0.018293804 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:15:04 np0005558317 podman[95926]: 2025-12-13 07:15:04.178857524 +0000 UTC m=+0.029008783 container create b4de93d0ecb6ab5f9aba5ed10ade272a03d3a85052deb5266728c21c68f0833a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_euclid, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 02:15:04 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v75: 197 pgs: 1 unknown, 1 creating+peering, 195 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Dec 13 02:15:04 np0005558317 systemd[1]: Started libpod-conmon-b4de93d0ecb6ab5f9aba5ed10ade272a03d3a85052deb5266728c21c68f0833a.scope.
Dec 13 02:15:04 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mds-cephfs-compute-0-zwnyoz[93860]: 2025-12-13T07:15:04.220+0000 7fe604a7f640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec 13 02:15:04 np0005558317 ceph-mds[93864]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec 13 02:15:04 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:15:04 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0905c8791ea007d786111054a7d97e3a47936d384a94cc447123b3d51ad6528d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:04 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0905c8791ea007d786111054a7d97e3a47936d384a94cc447123b3d51ad6528d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:04 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0905c8791ea007d786111054a7d97e3a47936d384a94cc447123b3d51ad6528d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:04 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0905c8791ea007d786111054a7d97e3a47936d384a94cc447123b3d51ad6528d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:04 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0905c8791ea007d786111054a7d97e3a47936d384a94cc447123b3d51ad6528d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:04 np0005558317 podman[95926]: 2025-12-13 07:15:04.255499753 +0000 UTC m=+0.105651013 container init b4de93d0ecb6ab5f9aba5ed10ade272a03d3a85052deb5266728c21c68f0833a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 02:15:04 np0005558317 podman[95926]: 2025-12-13 07:15:04.260626269 +0000 UTC m=+0.110777529 container start b4de93d0ecb6ab5f9aba5ed10ade272a03d3a85052deb5266728c21c68f0833a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_euclid, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:15:04 np0005558317 podman[95926]: 2025-12-13 07:15:04.261888321 +0000 UTC m=+0.112039581 container attach b4de93d0ecb6ab5f9aba5ed10ade272a03d3a85052deb5266728c21c68f0833a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_euclid, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 02:15:04 np0005558317 podman[95926]: 2025-12-13 07:15:04.166627457 +0000 UTC m=+0.016778737 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:15:04 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 13 02:15:04 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3886617627' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 02:15:04 np0005558317 frosty_mendeleev[95915]: 
Dec 13 02:15:04 np0005558317 systemd[1]: libpod-2738ed6eece7fcff7791bba57d43b3e1eedbc0a0fa35f3859cd1105ba3c0c685.scope: Deactivated successfully.
Dec 13 02:15:04 np0005558317 conmon[95915]: conmon 2738ed6eece7fcff7791 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2738ed6eece7fcff7791bba57d43b3e1eedbc0a0fa35f3859cd1105ba3c0c685.scope/container/memory.events
Dec 13 02:15:04 np0005558317 frosty_mendeleev[95915]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.kikquh","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Dec 13 02:15:04 np0005558317 podman[95892]: 2025-12-13 07:15:04.42728752 +0000 UTC m=+0.419784771 container died 2738ed6eece7fcff7791bba57d43b3e1eedbc0a0fa35f3859cd1105ba3c0c685 (image=quay.io/ceph/ceph:v20, name=frosty_mendeleev, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:15:04 np0005558317 systemd[1]: var-lib-containers-storage-overlay-fc8ce6819b805382e51e1a277b98a340dce2bcd70050c824108be109229b7686-merged.mount: Deactivated successfully.
Dec 13 02:15:04 np0005558317 podman[95892]: 2025-12-13 07:15:04.45974833 +0000 UTC m=+0.452245582 container remove 2738ed6eece7fcff7791bba57d43b3e1eedbc0a0fa35f3859cd1105ba3c0c685 (image=quay.io/ceph/ceph:v20, name=frosty_mendeleev, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Dec 13 02:15:04 np0005558317 systemd[1]: libpod-conmon-2738ed6eece7fcff7791bba57d43b3e1eedbc0a0fa35f3859cd1105ba3c0c685.scope: Deactivated successfully.
Dec 13 02:15:04 np0005558317 beautiful_euclid[95958]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:15:04 np0005558317 beautiful_euclid[95958]: --> All data devices are unavailable
Dec 13 02:15:04 np0005558317 systemd[1]: libpod-b4de93d0ecb6ab5f9aba5ed10ade272a03d3a85052deb5266728c21c68f0833a.scope: Deactivated successfully.
Dec 13 02:15:04 np0005558317 podman[95926]: 2025-12-13 07:15:04.633329911 +0000 UTC m=+0.483481170 container died b4de93d0ecb6ab5f9aba5ed10ade272a03d3a85052deb5266728c21c68f0833a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_euclid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 02:15:04 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:15:04 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:15:04 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:15:04 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:15:04 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:15:04 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Dec 13 02:15:04 np0005558317 podman[95926]: 2025-12-13 07:15:04.652070073 +0000 UTC m=+0.502221333 container remove b4de93d0ecb6ab5f9aba5ed10ade272a03d3a85052deb5266728c21c68f0833a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:15:04 np0005558317 systemd[1]: libpod-conmon-b4de93d0ecb6ab5f9aba5ed10ade272a03d3a85052deb5266728c21c68f0833a.scope: Deactivated successfully.
Dec 13 02:15:04 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Dec 13 02:15:04 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 13 02:15:04 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Dec 13 02:15:04 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Dec 13 02:15:04 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec 13 02:15:04 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Dec 13 02:15:04 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 42 pg[11.0( empty local-lis/les=41/42 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [1] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:04 np0005558317 systemd[1]: var-lib-containers-storage-overlay-0905c8791ea007d786111054a7d97e3a47936d384a94cc447123b3d51ad6528d-merged.mount: Deactivated successfully.
Dec 13 02:15:04 np0005558317 podman[96060]: 2025-12-13 07:15:04.986550553 +0000 UTC m=+0.025474029 container create 3319bbd468d7b54d327c59627e42872d255ba41a62d3c46a5dcefe1aebaaca71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_carver, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:15:05 np0005558317 systemd[1]: Started libpod-conmon-3319bbd468d7b54d327c59627e42872d255ba41a62d3c46a5dcefe1aebaaca71.scope.
Dec 13 02:15:05 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:15:05 np0005558317 podman[96060]: 2025-12-13 07:15:05.02654849 +0000 UTC m=+0.065471987 container init 3319bbd468d7b54d327c59627e42872d255ba41a62d3c46a5dcefe1aebaaca71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_carver, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:15:05 np0005558317 podman[96060]: 2025-12-13 07:15:05.030557515 +0000 UTC m=+0.069480992 container start 3319bbd468d7b54d327c59627e42872d255ba41a62d3c46a5dcefe1aebaaca71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_carver, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 02:15:05 np0005558317 podman[96060]: 2025-12-13 07:15:05.031621404 +0000 UTC m=+0.070544901 container attach 3319bbd468d7b54d327c59627e42872d255ba41a62d3c46a5dcefe1aebaaca71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_carver, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:15:05 np0005558317 fervent_carver[96073]: 167 167
Dec 13 02:15:05 np0005558317 systemd[1]: libpod-3319bbd468d7b54d327c59627e42872d255ba41a62d3c46a5dcefe1aebaaca71.scope: Deactivated successfully.
Dec 13 02:15:05 np0005558317 podman[96060]: 2025-12-13 07:15:05.033680444 +0000 UTC m=+0.072603931 container died 3319bbd468d7b54d327c59627e42872d255ba41a62d3c46a5dcefe1aebaaca71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_carver, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:15:05 np0005558317 systemd[1]: var-lib-containers-storage-overlay-e686fe38216b82c70785c5db1703dae835acef3fb64c20aa8a238a63a95b712e-merged.mount: Deactivated successfully.
Dec 13 02:15:05 np0005558317 podman[96060]: 2025-12-13 07:15:05.055111534 +0000 UTC m=+0.094035011 container remove 3319bbd468d7b54d327c59627e42872d255ba41a62d3c46a5dcefe1aebaaca71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_carver, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:15:05 np0005558317 podman[96060]: 2025-12-13 07:15:04.976760412 +0000 UTC m=+0.015683909 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:15:05 np0005558317 systemd[1]: libpod-conmon-3319bbd468d7b54d327c59627e42872d255ba41a62d3c46a5dcefe1aebaaca71.scope: Deactivated successfully.
Dec 13 02:15:05 np0005558317 podman[96120]: 2025-12-13 07:15:05.16799482 +0000 UTC m=+0.027879270 container create 31d05ac262c499f24bd2bf32bfccd96f11a33c0bba67502632d10bbc87ccdd77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 02:15:05 np0005558317 systemd[1]: Started libpod-conmon-31d05ac262c499f24bd2bf32bfccd96f11a33c0bba67502632d10bbc87ccdd77.scope.
Dec 13 02:15:05 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:15:05 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7190b0a9c2d5403aff95da9f6688be354bcadce99ba4e670afaf3422d4bc342/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:05 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7190b0a9c2d5403aff95da9f6688be354bcadce99ba4e670afaf3422d4bc342/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:05 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7190b0a9c2d5403aff95da9f6688be354bcadce99ba4e670afaf3422d4bc342/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:05 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7190b0a9c2d5403aff95da9f6688be354bcadce99ba4e670afaf3422d4bc342/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:05 np0005558317 python3[96114]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:15:05 np0005558317 podman[96120]: 2025-12-13 07:15:05.224938546 +0000 UTC m=+0.084822996 container init 31d05ac262c499f24bd2bf32bfccd96f11a33c0bba67502632d10bbc87ccdd77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_knuth, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:15:05 np0005558317 podman[96120]: 2025-12-13 07:15:05.230485732 +0000 UTC m=+0.090370182 container start 31d05ac262c499f24bd2bf32bfccd96f11a33c0bba67502632d10bbc87ccdd77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_knuth, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:15:05 np0005558317 podman[96120]: 2025-12-13 07:15:05.232477004 +0000 UTC m=+0.092361475 container attach 31d05ac262c499f24bd2bf32bfccd96f11a33c0bba67502632d10bbc87ccdd77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:15:05 np0005558317 podman[96136]: 2025-12-13 07:15:05.251755189 +0000 UTC m=+0.025031779 container create 79648eee50b5e3f79a91ec962890475807809921981eb35d629fa4b775bc124e (image=quay.io/ceph/ceph:v20, name=tender_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 02:15:05 np0005558317 podman[96120]: 2025-12-13 07:15:05.156984746 +0000 UTC m=+0.016869216 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:15:05 np0005558317 systemd[1]: Started libpod-conmon-79648eee50b5e3f79a91ec962890475807809921981eb35d629fa4b775bc124e.scope.
Dec 13 02:15:05 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:15:05 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/900766f18bd2f523432f604cd0e3f23922c067488872e1cf847bd6d6f8bd8479/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:05 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/900766f18bd2f523432f604cd0e3f23922c067488872e1cf847bd6d6f8bd8479/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:05 np0005558317 podman[96136]: 2025-12-13 07:15:05.294451166 +0000 UTC m=+0.067727756 container init 79648eee50b5e3f79a91ec962890475807809921981eb35d629fa4b775bc124e (image=quay.io/ceph/ceph:v20, name=tender_ptolemy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:15:05 np0005558317 podman[96136]: 2025-12-13 07:15:05.298801742 +0000 UTC m=+0.072078323 container start 79648eee50b5e3f79a91ec962890475807809921981eb35d629fa4b775bc124e (image=quay.io/ceph/ceph:v20, name=tender_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:15:05 np0005558317 podman[96136]: 2025-12-13 07:15:05.299953728 +0000 UTC m=+0.073230319 container attach 79648eee50b5e3f79a91ec962890475807809921981eb35d629fa4b775bc124e (image=quay.io/ceph/ceph:v20, name=tender_ptolemy, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:15:05 np0005558317 podman[96136]: 2025-12-13 07:15:05.242206982 +0000 UTC m=+0.015483592 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:15:05 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Dec 13 02:15:05 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]: {
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:    "0": [
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:        {
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "devices": [
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "/dev/loop3"
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            ],
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "lv_name": "ceph_lv0",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "lv_size": "21470642176",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "name": "ceph_lv0",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "tags": {
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.cluster_name": "ceph",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.crush_device_class": "",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.encrypted": "0",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.objectstore": "bluestore",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.osd_id": "0",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.type": "block",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.vdo": "0",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.with_tpm": "0"
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            },
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "type": "block",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "vg_name": "ceph_vg0"
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:        }
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:    ],
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:    "1": [
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:        {
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "devices": [
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "/dev/loop4"
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            ],
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "lv_name": "ceph_lv1",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "lv_size": "21470642176",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "name": "ceph_lv1",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "tags": {
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.cluster_name": "ceph",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.crush_device_class": "",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.encrypted": "0",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.objectstore": "bluestore",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.osd_id": "1",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.type": "block",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.vdo": "0",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.with_tpm": "0"
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            },
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "type": "block",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "vg_name": "ceph_vg1"
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:        }
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:    ],
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:    "2": [
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:        {
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "devices": [
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "/dev/loop5"
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            ],
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "lv_name": "ceph_lv2",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "lv_size": "21470642176",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "name": "ceph_lv2",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "tags": {
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.cluster_name": "ceph",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.crush_device_class": "",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.encrypted": "0",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.objectstore": "bluestore",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.osd_id": "2",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.type": "block",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.vdo": "0",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:                "ceph.with_tpm": "0"
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            },
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "type": "block",
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:            "vg_name": "ceph_vg2"
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:        }
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]:    ]
Dec 13 02:15:05 np0005558317 reverent_knuth[96133]: }
Dec 13 02:15:05 np0005558317 systemd[1]: libpod-31d05ac262c499f24bd2bf32bfccd96f11a33c0bba67502632d10bbc87ccdd77.scope: Deactivated successfully.
Dec 13 02:15:05 np0005558317 podman[96120]: 2025-12-13 07:15:05.482132656 +0000 UTC m=+0.342017107 container died 31d05ac262c499f24bd2bf32bfccd96f11a33c0bba67502632d10bbc87ccdd77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_knuth, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:15:05 np0005558317 podman[96120]: 2025-12-13 07:15:05.500350096 +0000 UTC m=+0.360234536 container remove 31d05ac262c499f24bd2bf32bfccd96f11a33c0bba67502632d10bbc87ccdd77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_knuth, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 02:15:05 np0005558317 systemd[1]: libpod-conmon-31d05ac262c499f24bd2bf32bfccd96f11a33c0bba67502632d10bbc87ccdd77.scope: Deactivated successfully.
Dec 13 02:15:05 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Dec 13 02:15:05 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3439360568' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Dec 13 02:15:05 np0005558317 tender_ptolemy[96150]: mimic
Dec 13 02:15:05 np0005558317 systemd[1]: libpod-79648eee50b5e3f79a91ec962890475807809921981eb35d629fa4b775bc124e.scope: Deactivated successfully.
Dec 13 02:15:05 np0005558317 podman[96136]: 2025-12-13 07:15:05.636176513 +0000 UTC m=+0.409453113 container died 79648eee50b5e3f79a91ec962890475807809921981eb35d629fa4b775bc124e (image=quay.io/ceph/ceph:v20, name=tender_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 02:15:05 np0005558317 podman[96136]: 2025-12-13 07:15:05.657128422 +0000 UTC m=+0.430405012 container remove 79648eee50b5e3f79a91ec962890475807809921981eb35d629fa4b775bc124e (image=quay.io/ceph/ceph:v20, name=tender_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:15:05 np0005558317 systemd[1]: libpod-conmon-79648eee50b5e3f79a91ec962890475807809921981eb35d629fa4b775bc124e.scope: Deactivated successfully.
Dec 13 02:15:05 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Dec 13 02:15:05 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 13 02:15:05 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Dec 13 02:15:05 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 13 02:15:05 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Dec 13 02:15:05 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Dec 13 02:15:05 np0005558317 radosgw[93487]: v1 topic migration: starting v1 topic migration..
Dec 13 02:15:05 np0005558317 radosgw[93487]: v1 topic migration: finished v1 topic migration
Dec 13 02:15:05 np0005558317 radosgw[93487]: framework: beast
Dec 13 02:15:05 np0005558317 radosgw[93487]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec 13 02:15:05 np0005558317 radosgw[93487]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec 13 02:15:05 np0005558317 radosgw[93487]: starting handler: beast
Dec 13 02:15:05 np0005558317 radosgw[93487]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 02:15:05 np0005558317 podman[96292]: 2025-12-13 07:15:05.884848646 +0000 UTC m=+0.029815570 container create fe578b8c1d8aabe387c4b9c0919d85cab96ec4d22b207c2ed5f99aa524d3d9f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 02:15:05 np0005558317 radosgw[93487]: mgrc service_daemon_register rgw.14256 metadata {arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86,cpu=AMD EPYC 7763 64-Core Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.kikquh,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025,kernel_version=5.14.0-648.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7865356,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=c41c06c0-96f4-44f4-8e75-f5ee0f887dbd,zone_name=default,zonegroup_id=3619564b-3f09-447d-be0a-4c55dcbaaf7a,zonegroup_name=default}
Dec 13 02:15:05 np0005558317 systemd[1]: Started libpod-conmon-fe578b8c1d8aabe387c4b9c0919d85cab96ec4d22b207c2ed5f99aa524d3d9f1.scope.
Dec 13 02:15:05 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:15:05 np0005558317 podman[96292]: 2025-12-13 07:15:05.938833931 +0000 UTC m=+0.083800846 container init fe578b8c1d8aabe387c4b9c0919d85cab96ec4d22b207c2ed5f99aa524d3d9f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:15:05 np0005558317 podman[96292]: 2025-12-13 07:15:05.943744251 +0000 UTC m=+0.088711165 container start fe578b8c1d8aabe387c4b9c0919d85cab96ec4d22b207c2ed5f99aa524d3d9f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 02:15:05 np0005558317 podman[96292]: 2025-12-13 07:15:05.94548906 +0000 UTC m=+0.090455994 container attach fe578b8c1d8aabe387c4b9c0919d85cab96ec4d22b207c2ed5f99aa524d3d9f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_rosalind, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:15:05 np0005558317 condescending_rosalind[96307]: 167 167
Dec 13 02:15:05 np0005558317 systemd[1]: libpod-fe578b8c1d8aabe387c4b9c0919d85cab96ec4d22b207c2ed5f99aa524d3d9f1.scope: Deactivated successfully.
Dec 13 02:15:05 np0005558317 podman[96292]: 2025-12-13 07:15:05.947391787 +0000 UTC m=+0.092358701 container died fe578b8c1d8aabe387c4b9c0919d85cab96ec4d22b207c2ed5f99aa524d3d9f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:15:05 np0005558317 podman[96292]: 2025-12-13 07:15:05.964180291 +0000 UTC m=+0.109147205 container remove fe578b8c1d8aabe387c4b9c0919d85cab96ec4d22b207c2ed5f99aa524d3d9f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 02:15:05 np0005558317 podman[96292]: 2025-12-13 07:15:05.871913004 +0000 UTC m=+0.016879937 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:15:05 np0005558317 systemd[1]: var-lib-containers-storage-overlay-900766f18bd2f523432f604cd0e3f23922c067488872e1cf847bd6d6f8bd8479-merged.mount: Deactivated successfully.
Dec 13 02:15:05 np0005558317 systemd[1]: var-lib-containers-storage-overlay-a7190b0a9c2d5403aff95da9f6688be354bcadce99ba4e670afaf3422d4bc342-merged.mount: Deactivated successfully.
Dec 13 02:15:05 np0005558317 systemd[1]: libpod-conmon-fe578b8c1d8aabe387c4b9c0919d85cab96ec4d22b207c2ed5f99aa524d3d9f1.scope: Deactivated successfully.
Dec 13 02:15:06 np0005558317 podman[96329]: 2025-12-13 07:15:06.080929121 +0000 UTC m=+0.027185306 container create 3d288f3e0a36780861944b5c72e10cbdc1fbcd0bddca4bf538c3bfe553db005d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 02:15:06 np0005558317 systemd[1]: Started libpod-conmon-3d288f3e0a36780861944b5c72e10cbdc1fbcd0bddca4bf538c3bfe553db005d.scope.
Dec 13 02:15:06 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:15:06 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/938aeea6205abe4969fd9a285121a40421e002a6826158c5445d46885c36da2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:06 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/938aeea6205abe4969fd9a285121a40421e002a6826158c5445d46885c36da2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:06 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/938aeea6205abe4969fd9a285121a40421e002a6826158c5445d46885c36da2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:06 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/938aeea6205abe4969fd9a285121a40421e002a6826158c5445d46885c36da2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:06 np0005558317 podman[96329]: 2025-12-13 07:15:06.147989232 +0000 UTC m=+0.094245416 container init 3d288f3e0a36780861944b5c72e10cbdc1fbcd0bddca4bf538c3bfe553db005d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_jackson, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:15:06 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Dec 13 02:15:06 np0005558317 podman[96329]: 2025-12-13 07:15:06.153314921 +0000 UTC m=+0.099571105 container start 3d288f3e0a36780861944b5c72e10cbdc1fbcd0bddca4bf538c3bfe553db005d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 02:15:06 np0005558317 podman[96329]: 2025-12-13 07:15:06.154341392 +0000 UTC m=+0.100597575 container attach 3d288f3e0a36780861944b5c72e10cbdc1fbcd0bddca4bf538c3bfe553db005d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_jackson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 02:15:06 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Dec 13 02:15:06 np0005558317 podman[96329]: 2025-12-13 07:15:06.070195187 +0000 UTC m=+0.016451391 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:15:06 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v78: 197 pgs: 197 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 3.7 KiB/s wr, 9 op/s
Dec 13 02:15:06 np0005558317 python3[96373]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:15:06 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Dec 13 02:15:06 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Dec 13 02:15:06 np0005558317 podman[96387]: 2025-12-13 07:15:06.469578644 +0000 UTC m=+0.055579634 container create aa1848a2fb05384fcf987cc6606ea5f9c2521db6d7d3e25c376b3bb0b85c6f04 (image=quay.io/ceph/ceph:v20, name=zealous_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:15:06 np0005558317 systemd[1]: Started libpod-conmon-aa1848a2fb05384fcf987cc6606ea5f9c2521db6d7d3e25c376b3bb0b85c6f04.scope.
Dec 13 02:15:06 np0005558317 podman[96387]: 2025-12-13 07:15:06.444013533 +0000 UTC m=+0.030014543 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:15:06 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:15:06 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a0011bc1cb1c9b79bd71167ca41677cea2c9e2b93969dc7cef470a6f88b7e09/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:06 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a0011bc1cb1c9b79bd71167ca41677cea2c9e2b93969dc7cef470a6f88b7e09/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:06 np0005558317 podman[96387]: 2025-12-13 07:15:06.551657141 +0000 UTC m=+0.137658152 container init aa1848a2fb05384fcf987cc6606ea5f9c2521db6d7d3e25c376b3bb0b85c6f04 (image=quay.io/ceph/ceph:v20, name=zealous_benz, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 02:15:06 np0005558317 podman[96387]: 2025-12-13 07:15:06.556491737 +0000 UTC m=+0.142492728 container start aa1848a2fb05384fcf987cc6606ea5f9c2521db6d7d3e25c376b3bb0b85c6f04 (image=quay.io/ceph/ceph:v20, name=zealous_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 02:15:06 np0005558317 podman[96387]: 2025-12-13 07:15:06.557528917 +0000 UTC m=+0.143529927 container attach aa1848a2fb05384fcf987cc6606ea5f9c2521db6d7d3e25c376b3bb0b85c6f04 (image=quay.io/ceph/ceph:v20, name=zealous_benz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:15:06 np0005558317 ceph-mon[74928]: from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 13 02:15:06 np0005558317 lvm[96482]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:15:06 np0005558317 lvm[96482]: VG ceph_vg1 finished
Dec 13 02:15:06 np0005558317 lvm[96481]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:15:06 np0005558317 lvm[96481]: VG ceph_vg0 finished
Dec 13 02:15:06 np0005558317 lvm[96485]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:15:06 np0005558317 lvm[96485]: VG ceph_vg2 finished
Dec 13 02:15:06 np0005558317 focused_jackson[96343]: {}
Dec 13 02:15:06 np0005558317 systemd[1]: libpod-3d288f3e0a36780861944b5c72e10cbdc1fbcd0bddca4bf538c3bfe553db005d.scope: Deactivated successfully.
Dec 13 02:15:06 np0005558317 podman[96329]: 2025-12-13 07:15:06.780117395 +0000 UTC m=+0.726373579 container died 3d288f3e0a36780861944b5c72e10cbdc1fbcd0bddca4bf538c3bfe553db005d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_jackson, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:15:06 np0005558317 systemd[1]: var-lib-containers-storage-overlay-938aeea6205abe4969fd9a285121a40421e002a6826158c5445d46885c36da2d-merged.mount: Deactivated successfully.
Dec 13 02:15:06 np0005558317 podman[96329]: 2025-12-13 07:15:06.802927538 +0000 UTC m=+0.749183721 container remove 3d288f3e0a36780861944b5c72e10cbdc1fbcd0bddca4bf538c3bfe553db005d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:15:06 np0005558317 systemd[1]: libpod-conmon-3d288f3e0a36780861944b5c72e10cbdc1fbcd0bddca4bf538c3bfe553db005d.scope: Deactivated successfully.
Dec 13 02:15:06 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:15:06 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:15:06 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:15:06 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:15:06 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Dec 13 02:15:06 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2763273209' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Dec 13 02:15:06 np0005558317 zealous_benz[96431]: 
Dec 13 02:15:06 np0005558317 zealous_benz[96431]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"rgw":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":7}}
Dec 13 02:15:06 np0005558317 systemd[1]: libpod-aa1848a2fb05384fcf987cc6606ea5f9c2521db6d7d3e25c376b3bb0b85c6f04.scope: Deactivated successfully.
Dec 13 02:15:06 np0005558317 podman[96387]: 2025-12-13 07:15:06.982469539 +0000 UTC m=+0.568470529 container died aa1848a2fb05384fcf987cc6606ea5f9c2521db6d7d3e25c376b3bb0b85c6f04 (image=quay.io/ceph/ceph:v20, name=zealous_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 02:15:06 np0005558317 systemd[1]: var-lib-containers-storage-overlay-9a0011bc1cb1c9b79bd71167ca41677cea2c9e2b93969dc7cef470a6f88b7e09-merged.mount: Deactivated successfully.
Dec 13 02:15:07 np0005558317 podman[96387]: 2025-12-13 07:15:07.002423593 +0000 UTC m=+0.588424583 container remove aa1848a2fb05384fcf987cc6606ea5f9c2521db6d7d3e25c376b3bb0b85c6f04 (image=quay.io/ceph/ceph:v20, name=zealous_benz, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:15:07 np0005558317 systemd[1]: libpod-conmon-aa1848a2fb05384fcf987cc6606ea5f9c2521db6d7d3e25c376b3bb0b85c6f04.scope: Deactivated successfully.
Dec 13 02:15:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:15:07 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:15:07 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:15:08 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v79: 197 pgs: 197 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 186 B/s rd, 2.7 KiB/s wr, 4 op/s
Dec 13 02:15:08 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Dec 13 02:15:08 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Dec 13 02:15:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:15:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:15:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:15:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:15:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:15:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:15:10 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v80: 197 pgs: 197 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 158 B/s rd, 316 B/s wr, 1 op/s
Dec 13 02:15:12 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v81: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 8.2 KiB/s wr, 178 op/s
Dec 13 02:15:12 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Dec 13 02:15:12 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Dec 13 02:15:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:15:14 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v82: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 7.0 KiB/s wr, 150 op/s
Dec 13 02:15:14 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Dec 13 02:15:14 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Dec 13 02:15:15 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Dec 13 02:15:15 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Dec 13 02:15:16 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Dec 13 02:15:16 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Dec 13 02:15:16 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v83: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 6.1 KiB/s wr, 135 op/s
Dec 13 02:15:17 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Dec 13 02:15:17 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Dec 13 02:15:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:15:18 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Dec 13 02:15:18 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Dec 13 02:15:18 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v84: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 118 op/s
Dec 13 02:15:18 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Dec 13 02:15:18 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Dec 13 02:15:19 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.12 scrub starts
Dec 13 02:15:19 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.12 scrub ok
Dec 13 02:15:20 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v85: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 118 op/s
Dec 13 02:15:20 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.c scrub starts
Dec 13 02:15:20 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.c scrub ok
Dec 13 02:15:21 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Dec 13 02:15:21 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Dec 13 02:15:22 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v86: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 118 op/s
Dec 13 02:15:22 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Dec 13 02:15:22 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Dec 13 02:15:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:15:23 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Dec 13 02:15:23 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Dec 13 02:15:24 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v87: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:15:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.e scrub starts
Dec 13 02:15:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.e scrub ok
Dec 13 02:15:26 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Dec 13 02:15:26 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Dec 13 02:15:26 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v88: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:15:27 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Dec 13 02:15:27 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Dec 13 02:15:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:15:28 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v89: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:15:28 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Dec 13 02:15:28 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Dec 13 02:15:29 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Dec 13 02:15:29 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Dec 13 02:15:30 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v90: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:15:30 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Dec 13 02:15:30 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Dec 13 02:15:31 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Dec 13 02:15:31 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Dec 13 02:15:31 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.a scrub starts
Dec 13 02:15:31 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.a scrub ok
Dec 13 02:15:32 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v91: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:15:32 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.c scrub starts
Dec 13 02:15:32 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.c scrub ok
Dec 13 02:15:32 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Dec 13 02:15:32 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Dec 13 02:15:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:15:33 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Dec 13 02:15:33 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Dec 13 02:15:33 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.b scrub starts
Dec 13 02:15:33 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.b scrub ok
Dec 13 02:15:34 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v92: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:15:35 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Dec 13 02:15:35 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Dec 13 02:15:36 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v93: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:15:37 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Dec 13 02:15:37 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Dec 13 02:15:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:15:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:15:38
Dec 13 02:15:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:15:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 02:15:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'default.rgw.log', 'backups', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'images']
Dec 13 02:15:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:15:38 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v94: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:15:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:15:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:15:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:15:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:15:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:15:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:15:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:15:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:15:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:15:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:15:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:15:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:15:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:15:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:15:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:15:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:15:39 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.b scrub starts
Dec 13 02:15:39 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.b scrub ok
Dec 13 02:15:40 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v95: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:15:40 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Dec 13 02:15:40 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Dec 13 02:15:40 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Dec 13 02:15:40 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Dec 13 02:15:41 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Dec 13 02:15:41 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Dec 13 02:15:42 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v96: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:15:42 np0005558317 python3[96557]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:15:42 np0005558317 podman[96558]: 2025-12-13 07:15:42.552617714 +0000 UTC m=+0.027368757 container create 1c14586c1853758710efe9d110425a7561d13c7c97c91fd77db0dd229a02110f (image=quay.io/ceph/ceph:v20, name=unruffled_mendel, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:15:42 np0005558317 systemd[1]: Started libpod-conmon-1c14586c1853758710efe9d110425a7561d13c7c97c91fd77db0dd229a02110f.scope.
Dec 13 02:15:42 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:15:42 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c18e4cc1d91ff0b26569a29e00356b155b7f940e37a97f098740ef70bb3ab69e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:42 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c18e4cc1d91ff0b26569a29e00356b155b7f940e37a97f098740ef70bb3ab69e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:42 np0005558317 podman[96558]: 2025-12-13 07:15:42.606847105 +0000 UTC m=+0.081598157 container init 1c14586c1853758710efe9d110425a7561d13c7c97c91fd77db0dd229a02110f (image=quay.io/ceph/ceph:v20, name=unruffled_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 02:15:42 np0005558317 podman[96558]: 2025-12-13 07:15:42.611776776 +0000 UTC m=+0.086527818 container start 1c14586c1853758710efe9d110425a7561d13c7c97c91fd77db0dd229a02110f (image=quay.io/ceph/ceph:v20, name=unruffled_mendel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:15:42 np0005558317 podman[96558]: 2025-12-13 07:15:42.613071211 +0000 UTC m=+0.087822253 container attach 1c14586c1853758710efe9d110425a7561d13c7c97c91fd77db0dd229a02110f (image=quay.io/ceph/ceph:v20, name=unruffled_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec 13 02:15:42 np0005558317 podman[96558]: 2025-12-13 07:15:42.542156523 +0000 UTC m=+0.016907566 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:15:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:15:42 np0005558317 unruffled_mendel[96570]: could not fetch user info: no user info saved
Dec 13 02:15:42 np0005558317 systemd[1]: libpod-1c14586c1853758710efe9d110425a7561d13c7c97c91fd77db0dd229a02110f.scope: Deactivated successfully.
Dec 13 02:15:42 np0005558317 conmon[96570]: conmon 1c14586c1853758710ef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1c14586c1853758710efe9d110425a7561d13c7c97c91fd77db0dd229a02110f.scope/container/memory.events
Dec 13 02:15:42 np0005558317 podman[96558]: 2025-12-13 07:15:42.715718657 +0000 UTC m=+0.190469699 container died 1c14586c1853758710efe9d110425a7561d13c7c97c91fd77db0dd229a02110f (image=quay.io/ceph/ceph:v20, name=unruffled_mendel, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:15:42 np0005558317 systemd[1]: var-lib-containers-storage-overlay-c18e4cc1d91ff0b26569a29e00356b155b7f940e37a97f098740ef70bb3ab69e-merged.mount: Deactivated successfully.
Dec 13 02:15:42 np0005558317 podman[96558]: 2025-12-13 07:15:42.732929789 +0000 UTC m=+0.207680830 container remove 1c14586c1853758710efe9d110425a7561d13c7c97c91fd77db0dd229a02110f (image=quay.io/ceph/ceph:v20, name=unruffled_mendel, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:15:42 np0005558317 systemd[1]: libpod-conmon-1c14586c1853758710efe9d110425a7561d13c7c97c91fd77db0dd229a02110f.scope: Deactivated successfully.
Dec 13 02:15:42 np0005558317 python3[96691]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:15:43 np0005558317 podman[96692]: 2025-12-13 07:15:43.006800788 +0000 UTC m=+0.029476394 container create 4216caef7ee7bafcf8d389fa852b8712bcb450b15d77af16c07d3b885ed752d2 (image=quay.io/ceph/ceph:v20, name=heuristic_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 02:15:43 np0005558317 systemd[1]: Started libpod-conmon-4216caef7ee7bafcf8d389fa852b8712bcb450b15d77af16c07d3b885ed752d2.scope.
Dec 13 02:15:43 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:15:43 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa05481457bf40776ab14d0c022c296355100c1609d3f319dc2a011c6948befe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:43 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa05481457bf40776ab14d0c022c296355100c1609d3f319dc2a011c6948befe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:15:43 np0005558317 podman[96692]: 2025-12-13 07:15:43.05142532 +0000 UTC m=+0.074100936 container init 4216caef7ee7bafcf8d389fa852b8712bcb450b15d77af16c07d3b885ed752d2 (image=quay.io/ceph/ceph:v20, name=heuristic_lichterman, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:15:43 np0005558317 podman[96692]: 2025-12-13 07:15:43.055394453 +0000 UTC m=+0.078070059 container start 4216caef7ee7bafcf8d389fa852b8712bcb450b15d77af16c07d3b885ed752d2 (image=quay.io/ceph/ceph:v20, name=heuristic_lichterman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:15:43 np0005558317 podman[96692]: 2025-12-13 07:15:43.056447615 +0000 UTC m=+0.079123221 container attach 4216caef7ee7bafcf8d389fa852b8712bcb450b15d77af16c07d3b885ed752d2 (image=quay.io/ceph/ceph:v20, name=heuristic_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 02:15:43 np0005558317 podman[96692]: 2025-12-13 07:15:42.994223443 +0000 UTC m=+0.016899059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]: {
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:    "user_id": "openstack",
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:    "display_name": "openstack",
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:    "email": "",
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:    "suspended": 0,
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:    "max_buckets": 1000,
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:    "subusers": [],
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:    "keys": [
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:        {
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:            "user": "openstack",
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:            "access_key": "NBMMPVNSN1MZ8JU3B7M3",
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:            "secret_key": "CRUCrylGoRevblZZMdJcJtQm0MBioJUeDl2glffW",
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:            "active": true,
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:            "create_date": "2025-12-13T07:15:43.145899Z"
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:        }
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:    ],
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:    "swift_keys": [],
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:    "caps": [],
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:    "op_mask": "read, write, delete",
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:    "default_placement": "",
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:    "default_storage_class": "",
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:    "placement_tags": [],
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:    "bucket_quota": {
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:        "enabled": false,
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:        "check_on_raw": false,
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:        "max_size": -1,
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:        "max_size_kb": 0,
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:        "max_objects": -1
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:    },
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:    "user_quota": {
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:        "enabled": false,
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:        "check_on_raw": false,
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:        "max_size": -1,
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:        "max_size_kb": 0,
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:        "max_objects": -1
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:    },
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:    "temp_url_keys": [],
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:    "type": "rgw",
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:    "mfa_ids": [],
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:    "account_id": "",
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:    "path": "/",
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:    "create_date": "2025-12-13T07:15:43.145708Z",
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:    "tags": [],
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]:    "group_ids": []
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]: }
Dec 13 02:15:43 np0005558317 heuristic_lichterman[96704]: 
Dec 13 02:15:43 np0005558317 systemd[1]: libpod-4216caef7ee7bafcf8d389fa852b8712bcb450b15d77af16c07d3b885ed752d2.scope: Deactivated successfully.
Dec 13 02:15:43 np0005558317 conmon[96704]: conmon 4216caef7ee7bafcf8d3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4216caef7ee7bafcf8d389fa852b8712bcb450b15d77af16c07d3b885ed752d2.scope/container/memory.events
Dec 13 02:15:43 np0005558317 podman[96692]: 2025-12-13 07:15:43.163323369 +0000 UTC m=+0.185998975 container died 4216caef7ee7bafcf8d389fa852b8712bcb450b15d77af16c07d3b885ed752d2 (image=quay.io/ceph/ceph:v20, name=heuristic_lichterman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 02:15:43 np0005558317 systemd[1]: var-lib-containers-storage-overlay-aa05481457bf40776ab14d0c022c296355100c1609d3f319dc2a011c6948befe-merged.mount: Deactivated successfully.
Dec 13 02:15:43 np0005558317 podman[96692]: 2025-12-13 07:15:43.181107536 +0000 UTC m=+0.203783142 container remove 4216caef7ee7bafcf8d389fa852b8712bcb450b15d77af16c07d3b885ed752d2 (image=quay.io/ceph/ceph:v20, name=heuristic_lichterman, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 02:15:43 np0005558317 systemd[76210]: Starting Mark boot as successful...
Dec 13 02:15:43 np0005558317 systemd[76210]: Finished Mark boot as successful.
Dec 13 02:15:43 np0005558317 systemd[1]: libpod-conmon-4216caef7ee7bafcf8d389fa852b8712bcb450b15d77af16c07d3b885ed752d2.scope: Deactivated successfully.
Dec 13 02:15:43 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Dec 13 02:15:43 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Dec 13 02:15:44 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v97: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:15:44 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.d scrub starts
Dec 13 02:15:44 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.d scrub ok
Dec 13 02:15:44 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.b scrub starts
Dec 13 02:15:44 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.b scrub ok
Dec 13 02:15:44 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:15:44 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:15:44 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:15:44 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:15:44 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:15:44 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:15:44 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:15:44 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:15:44 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:15:44 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:15:44 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:15:44 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:15:44 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.445826042658682e-07 of space, bias 4.0, pg target 0.0008934991251190418 quantized to 16 (current 32)
Dec 13 02:15:44 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:15:44 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:15:44 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:15:44 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Dec 13 02:15:44 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:15:44 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 1)
Dec 13 02:15:44 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:15:44 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 02:15:44 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:15:44 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 02:15:44 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Dec 13 02:15:44 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:15:45 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.e scrub starts
Dec 13 02:15:45 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.e scrub ok
Dec 13 02:15:45 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Dec 13 02:15:45 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:15:45 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:15:45 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Dec 13 02:15:45 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Dec 13 02:15:45 np0005558317 ceph-mgr[75200]: [progress INFO root] update: starting ev 12420bfd-0b2b-436d-86b0-cbce48bccd77 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec 13 02:15:45 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Dec 13 02:15:45 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:15:46 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v99: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 28 op/s
Dec 13 02:15:46 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 02:15:46 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:15:46 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Dec 13 02:15:46 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:15:46 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:15:46 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Dec 13 02:15:46 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:15:46 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:15:46 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:15:46 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 45 pg[8.0( v 36'6 (0'0,36'6] local-lis/les=35/36 n=6 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=45 pruub=8.345492363s) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 36'5 mlcod 36'5 active pruub 99.056396484s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:46 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Dec 13 02:15:46 np0005558317 ceph-mgr[75200]: [progress INFO root] update: starting ev 7f57f5cd-035c-4a65-8d09-7296c792b467 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec 13 02:15:46 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Dec 13 02:15:46 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:15:46 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 45 pg[8.0( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=45 pruub=8.345492363s) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 36'5 mlcod 0'0 unknown pruub 99.056396484s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:46 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x560fe1347b00) split_cache   moving buffer(0x560fe0176e00 space 0x560fe0549440 0x0~424 clean)
Dec 13 02:15:46 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x560fe1347b00) split_cache   moving buffer(0x560fdec95f80 space 0x560fdfa54240 0x0~1b4 clean)
Dec 13 02:15:46 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x560fe1347b00) split_cache   moving buffer(0x560fe0181d00 space 0x560fe1852840 0x0~2e clean)
Dec 13 02:15:46 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x560fe1347b00) split_cache   moving buffer(0x560fe01fd900 space 0x560fe13b1740 0x0~2e clean)
Dec 13 02:15:47 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.d scrub starts
Dec 13 02:15:47 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.d scrub ok
Dec 13 02:15:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Dec 13 02:15:47 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:15:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.1a( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.12( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.13( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.1c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.1d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.1e( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.1f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.11( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.19( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.1b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.4( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=1 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.18( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.5( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=1 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.6( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=1 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.7( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.9( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.a( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.8( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.e( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.3( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=1 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=1 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.1( v 36'6 (0'0,36'6] local-lis/les=35/36 n=1 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.10( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.17( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.16( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.15( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.14( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:47 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-mgr[75200]: [progress INFO root] update: starting ev 9aa03648-618e-4794-8c1f-b13a271ecad7 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec 13 02:15:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Dec 13 02:15:47 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.1e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.19( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.5( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.0( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 36'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.7( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.8( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.3( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.13( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.1( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.17( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.16( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:47 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:15:47 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:15:47 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:15:47 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Dec 13 02:15:47 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Dec 13 02:15:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:15:48 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v102: 228 pgs: 31 unknown, 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 46 op/s
Dec 13 02:15:48 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 02:15:48 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:15:48 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 02:15:48 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.b scrub starts
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.b scrub ok
Dec 13 02:15:48 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Dec 13 02:15:48 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:15:48 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:15:48 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:15:48 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 47 pg[9.0( v 43'551 (0'0,43'551] local-lis/les=37/38 n=210 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=47 pruub=8.344427109s) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 43'550 mlcod 43'550 active pruub 101.061424255s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:48 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Dec 13 02:15:48 np0005558317 ceph-mgr[75200]: [progress INFO root] update: starting ev adb4db96-1a14-4bf8-a24a-e6cee7f34630 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec 13 02:15:48 np0005558317 ceph-mgr[75200]: [progress INFO root] complete: finished ev 12420bfd-0b2b-436d-86b0-cbce48bccd77 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec 13 02:15:48 np0005558317 ceph-mgr[75200]: [progress INFO root] Completed event 12420bfd-0b2b-436d-86b0-cbce48bccd77 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Dec 13 02:15:48 np0005558317 ceph-mgr[75200]: [progress INFO root] complete: finished ev 7f57f5cd-035c-4a65-8d09-7296c792b467 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec 13 02:15:48 np0005558317 ceph-mgr[75200]: [progress INFO root] Completed event 7f57f5cd-035c-4a65-8d09-7296c792b467 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Dec 13 02:15:48 np0005558317 ceph-mgr[75200]: [progress INFO root] complete: finished ev 9aa03648-618e-4794-8c1f-b13a271ecad7 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec 13 02:15:48 np0005558317 ceph-mgr[75200]: [progress INFO root] Completed event 9aa03648-618e-4794-8c1f-b13a271ecad7 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Dec 13 02:15:48 np0005558317 ceph-mgr[75200]: [progress INFO root] complete: finished ev adb4db96-1a14-4bf8-a24a-e6cee7f34630 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec 13 02:15:48 np0005558317 ceph-mgr[75200]: [progress INFO root] Completed event adb4db96-1a14-4bf8-a24a-e6cee7f34630 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Dec 13 02:15:48 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:15:48 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:15:48 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:15:48 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:15:48 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:15:48 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:15:48 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 47 pg[9.0( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=6 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=47 pruub=8.344427109s) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 43'550 mlcod 0'0 unknown pruub 101.061424255s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe00aa480 space 0x560fe1afab40 0x0~9a clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0181100 space 0x560fe1a47440 0x0~9a clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe00aab80 space 0x560fe1a4d140 0x0~98 clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe01f0b00 space 0x560fe1aa2840 0x0~9a clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe019b480 space 0x560fe1aa4840 0x0~9a clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe00b8200 space 0x560fe1809140 0x0~9a clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0193000 space 0x560fdf5d2b40 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0192580 space 0x560fe17f1a40 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe019bf80 space 0x560fe17ec540 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0192980 space 0x560fe17f0840 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe019b500 space 0x560fe17f9740 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0180100 space 0x560fe18b1d40 0x0~1c clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fdde7c300 space 0x560fdf5d2240 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe012a300 space 0x560fe1a93140 0x0~9a clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0192780 space 0x560fe17f1140 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe01f1600 space 0x560fe1a78b40 0x0~98 clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe019b300 space 0x560fe1800240 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe019b080 space 0x560fe1aa5740 0x0~9a clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0187600 space 0x560fe1a11740 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe01f0180 space 0x560fe1801d40 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0192e00 space 0x560fdf5d3440 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0187c00 space 0x560fe1803140 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe019b100 space 0x560fe1800b40 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe01fd580 space 0x560fe17b1740 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe019b700 space 0x560fe17f8e40 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe016a080 space 0x560fe1a6a240 0x0~98 clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0192c00 space 0x560fdf5d3d40 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe00ab800 space 0x560fe1aa5140 0x0~9a clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe00b8e00 space 0x560fdfa55a40 0x0~98 clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe00aa380 space 0x560fe1a93a40 0x0~9a clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe019ad80 space 0x560fe174ce40 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe019b880 space 0x560fe1a92840 0x0~9a clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe019ab00 space 0x560fe1801440 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fdfea2700 space 0x560fe1acb140 0x0~9a clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe00aaa80 space 0x560fe13b1440 0x0~9a clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0192380 space 0x560fe17bf140 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe00aa100 space 0x560fe1a47140 0x0~98 clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe019bd00 space 0x560fe17ece40 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe019bb00 space 0x560fdf797440 0x0~98 clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0187480 space 0x560fe1a4ce40 0x0~9a clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe00b8800 space 0x560fe1801140 0x0~9a clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe019bf00 space 0x560fe17ed740 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe019b900 space 0x560fe17f8540 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe00aad00 space 0x560fe1797a40 0x0~9a clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe012aa00 space 0x560fe17b0e40 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fdfe9b280 space 0x560fe1aa3140 0x0~9a clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0187880 space 0x560fe156c540 0x0~9a clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe00b8980 space 0x560fe1852540 0x0~98 clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe01f0800 space 0x560fe17b0540 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe00aa200 space 0x560fdf797140 0x0~9a clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0187f80 space 0x560fe1a10540 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe00ab100 space 0x560fe179e240 0x0~9a clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fdffe1d80 space 0x560fe1802840 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe016bc00 space 0x560fe1853740 0x0~9a clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe016b680 space 0x560fe17bfa40 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0192500 space 0x560fe1796e40 0x0~9a clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0187800 space 0x560fe1a10e40 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0192180 space 0x560fe17be840 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0203e00 space 0x560fe1a5c840 0x0~98 clean)
Dec 13 02:15:48 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0187d80 space 0x560fe1803a40 0x0~6e clean)
Dec 13 02:15:48 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Dec 13 02:15:48 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Dec 13 02:15:48 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 47 pg[10.0( v 43'66 (0'0,43'66] local-lis/les=39/40 n=9 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=47 pruub=9.963310242s) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 43'65 mlcod 43'65 active pruub 100.238204956s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:48 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 47 pg[10.0( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=47 pruub=9.963310242s) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 43'65 mlcod 0'0 unknown pruub 100.238204956s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-mgr[75200]: [progress INFO root] Writing back 16 completed events
Dec 13 02:15:49 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 13 02:15:49 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Dec 13 02:15:49 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Dec 13 02:15:49 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.15( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.14( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.17( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.16( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.11( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.3( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.2( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.d( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.c( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.f( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.9( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.b( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.e( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.a( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.8( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.1( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.6( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.7( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.4( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.5( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.1a( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.18( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.19( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.1e( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.1f( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.1c( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.1d( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.12( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.10( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.1b( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.13( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.14( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.0( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 43'550 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.2( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.a( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.4( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.5( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.1a( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.12( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.10( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.0( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 43'65 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:49 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:50 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v105: 290 pgs: 93 unknown, 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:15:50 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 02:15:50 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:15:50 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Dec 13 02:15:50 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Dec 13 02:15:50 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Dec 13 02:15:50 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:15:50 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Dec 13 02:15:50 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:15:50 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Dec 13 02:15:50 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 49 pg[11.0( v 43'2 (0'0,43'2] local-lis/les=41/42 n=2 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=49 pruub=10.106846809s) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 43'1 mlcod 43'1 active pruub 105.078475952s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:50 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 49 pg[11.0( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=49 pruub=10.106846809s) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 43'1 mlcod 0'0 unknown pruub 105.078475952s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Dec 13 02:15:51 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Dec 13 02:15:51 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:15:51 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Dec 13 02:15:51 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.17( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.16( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.15( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.14( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.13( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.2( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=1 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=41/42 n=1 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.f( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.e( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.d( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.c( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.8( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.a( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.3( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.4( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.6( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.5( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.7( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.18( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1a( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1d( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1c( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1e( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1f( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.11( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.10( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.12( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.16( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.13( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.0( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 43'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.19( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.5( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.7( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:52 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v108: 321 pgs: 62 unknown, 259 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:15:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:15:53 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Dec 13 02:15:53 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Dec 13 02:15:54 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v109: 321 pgs: 31 unknown, 290 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:15:55 np0005558317 systemd-logind[745]: New session 33 of user zuul.
Dec 13 02:15:55 np0005558317 systemd[1]: Started Session 33 of User zuul.
Dec 13 02:15:55 np0005558317 python3.9[96955]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:15:56 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v110: 321 pgs: 321 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:15:56 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 02:15:56 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:15:56 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 02:15:56 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:15:56 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Dec 13 02:15:56 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Dec 13 02:15:56 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 02:15:56 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:15:56 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Dec 13 02:15:56 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:15:56 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:15:56 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Dec 13 02:15:56 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:15:56 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:15:56 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:15:56 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 13 02:15:56 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:15:56 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Dec 13 02:15:56 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863642693s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.896354675s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863589287s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896354675s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863616943s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.896415710s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863591194s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896415710s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863552094s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 active pruub 106.896423340s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863612175s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 active pruub 106.896522522s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863510132s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.896423340s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863586426s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.896522522s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.865627289s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.898643494s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.865616798s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898643494s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863466263s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.896522522s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863451004s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896522522s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863352776s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.896545410s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863326073s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.896537781s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863339424s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896545410s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863314629s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896537781s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.864120483s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.897537231s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.864109993s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897537231s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863996506s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.897552490s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863986015s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897552490s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863911629s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.897605896s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863902092s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897605896s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862752914s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.896553040s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862736702s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896553040s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863280296s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 active pruub 106.897605896s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863158226s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.897590637s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863239288s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897605896s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863135338s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897590637s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862961769s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 active pruub 106.897644043s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863029480s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.897651672s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862827301s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897651672s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862700462s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 active pruub 106.897689819s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862665176s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.897674561s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862675667s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897689819s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862626076s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897674561s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862836838s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 active pruub 106.898017883s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862815857s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.898017883s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862802505s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.898086548s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862786293s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898086548s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862939835s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897644043s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862488747s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.898033142s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862475395s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898033142s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.860626221s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.896415710s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.860574722s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896415710s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.844452858s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718498230s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.844369888s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718498230s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854372978s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.729553223s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854341507s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.729553223s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.843180656s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718505859s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.843167305s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718505859s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.872672081s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748138428s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854546547s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730064392s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.872431755s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748138428s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854331970s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730064392s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.841082573s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718490601s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.841072083s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718490601s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852649689s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730140686s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852640152s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730140686s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870583534s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748161316s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870573044s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748161316s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870308876s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748184204s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870298386s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748184204s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.840455055s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718482971s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.840443611s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718482971s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852058411s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730171204s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852048874s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730171204s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869616508s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748191833s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869601250s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748191833s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839732170s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718414307s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839722633s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718414307s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.851435661s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730194092s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.851426125s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730194092s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869359016s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748191833s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869349480s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748191833s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839279175s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718368530s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839269638s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718368530s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869021416s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748207092s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868998528s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748207092s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839054108s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718353271s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839038849s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718353271s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850866318s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730262756s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850856781s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730262756s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868752480s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748222351s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868742943s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748222351s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850701332s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730285645s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850690842s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730285645s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868589401s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748245239s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868580818s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748245239s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850529671s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730285645s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850520134s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730285645s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838470459s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718345642s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838461876s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718345642s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868296623s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748268127s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868269920s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748268127s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838410378s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718490601s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838400841s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718490601s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838177681s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718376160s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838152885s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718376160s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867964745s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748283386s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867954254s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748283386s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849902153s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730323792s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849894524s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730323792s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867810249s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748298645s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867800713s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748298645s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.837768555s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718368530s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.837759018s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718368530s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849675179s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730346680s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849659920s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730346680s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868008614s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748779297s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867999077s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748779297s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867127419s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748153687s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867115974s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748153687s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849251747s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 43'551 active pruub 109.730361938s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849235535s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 43'551 unknown NOTIFY pruub 109.730361938s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867587090s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748779297s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867555618s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748779297s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836945534s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718269348s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836936951s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718269348s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867397308s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748802185s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867383003s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748802185s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867756844s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.749267578s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867748260s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749267578s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848821640s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730400085s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848813057s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730400085s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867094994s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748840332s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867082596s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748840332s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836301804s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718193054s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836292267s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718193054s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867068291s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.749168396s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867057800s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749168396s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835968971s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718185425s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835960388s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718185425s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866882324s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.749176025s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866874695s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749176025s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848036766s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730422974s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848023415s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730422974s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835701942s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718177795s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835692406s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718177795s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866633415s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.749198914s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866624832s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749198914s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866514206s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.749183655s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866498947s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749183655s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.832565308s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.715339661s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.832555771s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.715339661s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848815918s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.731674194s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848808289s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.731674194s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866249084s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.749206543s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866222382s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749206543s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835205078s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718284607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835196495s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866911888s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.750068665s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866901398s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.750068665s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.847294807s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730545044s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.847285271s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730545044s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.834938049s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718284607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.834763527s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.846091270s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730407715s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.846067429s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730407715s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.830695152s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.715332031s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.830681801s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.715332031s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.833575249s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718284607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.833558083s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.863280296s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748130798s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:56 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.863265038s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748130798s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.1( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.4( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.6( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.14( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.10( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.19( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:56 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.17( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:57 np0005558317 python3.9[97173]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:15:57 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Dec 13 02:15:57 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Dec 13 02:15:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Dec 13 02:15:57 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:15:57 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:15:57 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 13 02:15:57 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:15:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Dec 13 02:15:57 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:57 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=51/52 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.e( v 48'67 lc 43'54 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.d( v 48'67 lc 43'55 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.6( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.15( v 48'67 lc 43'53 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.9( v 48'67 lc 43'58 (0'0,48'67] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:57 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=51/52 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:57 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 lc 43'17 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 lc 43'57 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:15:58 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec 13 02:15:58 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v113: 321 pgs: 16 unknown, 32 peering, 273 active+clean; 457 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:15:58 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec 13 02:15:58 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Dec 13 02:15:58 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Dec 13 02:15:58 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Dec 13 02:15:58 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:58 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:58 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:58 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:58 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:58 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:58 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:58 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:58 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:58 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:58 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:58 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:58 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:58 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:58 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:58 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:15:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Dec 13 02:15:59 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Dec 13 02:15:59 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Dec 13 02:15:59 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:59 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:59 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:59 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:59 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327981949s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.118011475s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327939034s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118011475s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327603340s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117866516s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327571869s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117866516s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327441216s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117881775s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327405930s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117881775s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:59 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:59 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:59 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:59 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:59 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:59 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:59 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:59 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:59 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:59 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:59 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:59 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:59 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:59 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:59 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:59 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:59 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:59 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:59 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:59 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:59 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:59 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326743126s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117576599s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:59 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:15:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326668739s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117576599s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326932907s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117958069s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326913834s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117958069s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326637268s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117897034s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326610565s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117897034s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323873520s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.116912842s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324602127s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117736816s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324528694s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117736816s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324680328s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.118041992s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324645996s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118041992s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323390007s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.116912842s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324029922s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117660522s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323875427s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117660522s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323761940s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117958069s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323680878s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117927551s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323638916s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117927551s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323557854s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117958069s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.322302818s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.116996765s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323290825s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.118003845s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:15:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323190689s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118003845s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:15:59 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.322153091s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.116996765s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:00 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v116: 321 pgs: 16 unknown, 32 peering, 273 active+clean; 457 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 364 B/s, 0 objects/s recovering
Dec 13 02:16:00 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.a scrub starts
Dec 13 02:16:00 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Dec 13 02:16:00 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.a scrub ok
Dec 13 02:16:00 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Dec 13 02:16:00 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Dec 13 02:16:00 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Dec 13 02:16:00 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Dec 13 02:16:00 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.320955276s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 active pruub 119.118865967s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:00 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.320901871s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118865967s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:00 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.319732666s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=48'552 lcod 53'553 active pruub 119.118080139s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:00 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.319633484s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=48'552 lcod 53'553 unknown NOTIFY pruub 119.118080139s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:00 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:00 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 pct=0'0 crt=48'552 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:00 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:00 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=48'552 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:00 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:00 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:00 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:00 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:00 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:00 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:00 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:00 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:00 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:00 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:00 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:00 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:00 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:00 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:01 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Dec 13 02:16:01 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Dec 13 02:16:01 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Dec 13 02:16:01 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Dec 13 02:16:01 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Dec 13 02:16:01 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 56 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=55/56 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=53'554 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:01 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 56 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:02 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v119: 321 pgs: 14 unknown, 2 active+remapped, 32 peering, 273 active+clean; 458 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1023 B/s wr, 26 op/s; 600 B/s, 4 objects/s recovering
Dec 13 02:16:02 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Dec 13 02:16:02 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Dec 13 02:16:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:16:04 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v120: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 5.7 KiB/s wr, 145 op/s; 832 B/s, 19 objects/s recovering
Dec 13 02:16:04 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Dec 13 02:16:04 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Dec 13 02:16:04 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Dec 13 02:16:04 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Dec 13 02:16:04 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Dec 13 02:16:04 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Dec 13 02:16:04 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 13 02:16:04 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Dec 13 02:16:04 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Dec 13 02:16:05 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 13 02:16:06 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v122: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 5.5 KiB/s wr, 140 op/s; 802 B/s, 18 objects/s recovering
Dec 13 02:16:06 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Dec 13 02:16:06 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Dec 13 02:16:06 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Dec 13 02:16:06 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Dec 13 02:16:06 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 13 02:16:06 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Dec 13 02:16:06 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Dec 13 02:16:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:16:07 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:16:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:16:07 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:16:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:16:07 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:16:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:16:07 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:16:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:16:07 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:16:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:16:07 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:16:07 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Dec 13 02:16:07 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Dec 13 02:16:07 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 13 02:16:07 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:16:07 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:16:07 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:16:07 np0005558317 podman[97319]: 2025-12-13 07:16:07.673424865 +0000 UTC m=+0.026791015 container create 82ea14481616421d807138becbfb647ea372aea4638e40c0b8852bb2bb8a79bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_leavitt, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 02:16:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:16:07 np0005558317 systemd[1]: Started libpod-conmon-82ea14481616421d807138becbfb647ea372aea4638e40c0b8852bb2bb8a79bf.scope.
Dec 13 02:16:07 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:16:07 np0005558317 podman[97319]: 2025-12-13 07:16:07.736997894 +0000 UTC m=+0.090364043 container init 82ea14481616421d807138becbfb647ea372aea4638e40c0b8852bb2bb8a79bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_leavitt, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:16:07 np0005558317 podman[97319]: 2025-12-13 07:16:07.742792655 +0000 UTC m=+0.096158805 container start 82ea14481616421d807138becbfb647ea372aea4638e40c0b8852bb2bb8a79bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_leavitt, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 02:16:07 np0005558317 podman[97319]: 2025-12-13 07:16:07.745947824 +0000 UTC m=+0.099313994 container attach 82ea14481616421d807138becbfb647ea372aea4638e40c0b8852bb2bb8a79bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:16:07 np0005558317 adoring_leavitt[97332]: 167 167
Dec 13 02:16:07 np0005558317 systemd[1]: libpod-82ea14481616421d807138becbfb647ea372aea4638e40c0b8852bb2bb8a79bf.scope: Deactivated successfully.
Dec 13 02:16:07 np0005558317 conmon[97332]: conmon 82ea14481616421d8071 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82ea14481616421d807138becbfb647ea372aea4638e40c0b8852bb2bb8a79bf.scope/container/memory.events
Dec 13 02:16:07 np0005558317 podman[97319]: 2025-12-13 07:16:07.747842543 +0000 UTC m=+0.101208693 container died 82ea14481616421d807138becbfb647ea372aea4638e40c0b8852bb2bb8a79bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_leavitt, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:16:07 np0005558317 systemd[1]: var-lib-containers-storage-overlay-e4ed489840e4930b4483950a8efbdec4fb2cf7c847380cae6b1f6b508f69a1e6-merged.mount: Deactivated successfully.
Dec 13 02:16:07 np0005558317 podman[97319]: 2025-12-13 07:16:07.662713444 +0000 UTC m=+0.016079615 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:16:07 np0005558317 podman[97319]: 2025-12-13 07:16:07.769295809 +0000 UTC m=+0.122661960 container remove 82ea14481616421d807138becbfb647ea372aea4638e40c0b8852bb2bb8a79bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_leavitt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:16:07 np0005558317 systemd[1]: libpod-conmon-82ea14481616421d807138becbfb647ea372aea4638e40c0b8852bb2bb8a79bf.scope: Deactivated successfully.
Dec 13 02:16:07 np0005558317 podman[97353]: 2025-12-13 07:16:07.880013639 +0000 UTC m=+0.028154688 container create 08e9aef1c643ede25a77b736b7cdd5d57dd1d9c6fedb024dbca4445cb2e0da04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_lovelace, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:16:07 np0005558317 systemd[1]: Started libpod-conmon-08e9aef1c643ede25a77b736b7cdd5d57dd1d9c6fedb024dbca4445cb2e0da04.scope.
Dec 13 02:16:07 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:16:07 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d4864f05d50444d8163148a303843deffddf72be31335b50198f3df52dc9b1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:16:07 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d4864f05d50444d8163148a303843deffddf72be31335b50198f3df52dc9b1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:16:07 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d4864f05d50444d8163148a303843deffddf72be31335b50198f3df52dc9b1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:16:07 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d4864f05d50444d8163148a303843deffddf72be31335b50198f3df52dc9b1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:16:07 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d4864f05d50444d8163148a303843deffddf72be31335b50198f3df52dc9b1b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:16:07 np0005558317 podman[97353]: 2025-12-13 07:16:07.936944829 +0000 UTC m=+0.085085888 container init 08e9aef1c643ede25a77b736b7cdd5d57dd1d9c6fedb024dbca4445cb2e0da04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_lovelace, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 02:16:07 np0005558317 podman[97353]: 2025-12-13 07:16:07.942492709 +0000 UTC m=+0.090633758 container start 08e9aef1c643ede25a77b736b7cdd5d57dd1d9c6fedb024dbca4445cb2e0da04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_lovelace, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 02:16:07 np0005558317 podman[97353]: 2025-12-13 07:16:07.943623257 +0000 UTC m=+0.091764305 container attach 08e9aef1c643ede25a77b736b7cdd5d57dd1d9c6fedb024dbca4445cb2e0da04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 02:16:07 np0005558317 podman[97353]: 2025-12-13 07:16:07.867958984 +0000 UTC m=+0.016100053 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:16:08 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v124: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 4.3 KiB/s wr, 109 op/s; 572 B/s, 13 objects/s recovering
Dec 13 02:16:08 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Dec 13 02:16:08 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Dec 13 02:16:08 np0005558317 friendly_lovelace[97366]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:16:08 np0005558317 friendly_lovelace[97366]: --> All data devices are unavailable
Dec 13 02:16:08 np0005558317 systemd[1]: libpod-08e9aef1c643ede25a77b736b7cdd5d57dd1d9c6fedb024dbca4445cb2e0da04.scope: Deactivated successfully.
Dec 13 02:16:08 np0005558317 podman[97353]: 2025-12-13 07:16:08.313035031 +0000 UTC m=+0.461176081 container died 08e9aef1c643ede25a77b736b7cdd5d57dd1d9c6fedb024dbca4445cb2e0da04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 02:16:08 np0005558317 systemd[1]: var-lib-containers-storage-overlay-5d4864f05d50444d8163148a303843deffddf72be31335b50198f3df52dc9b1b-merged.mount: Deactivated successfully.
Dec 13 02:16:08 np0005558317 podman[97353]: 2025-12-13 07:16:08.33514591 +0000 UTC m=+0.483286959 container remove 08e9aef1c643ede25a77b736b7cdd5d57dd1d9c6fedb024dbca4445cb2e0da04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_lovelace, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 02:16:08 np0005558317 systemd[1]: libpod-conmon-08e9aef1c643ede25a77b736b7cdd5d57dd1d9c6fedb024dbca4445cb2e0da04.scope: Deactivated successfully.
Dec 13 02:16:08 np0005558317 systemd[1]: session-33.scope: Deactivated successfully.
Dec 13 02:16:08 np0005558317 systemd[1]: session-33.scope: Consumed 1.380s CPU time.
Dec 13 02:16:08 np0005558317 systemd-logind[745]: Session 33 logged out. Waiting for processes to exit.
Dec 13 02:16:08 np0005558317 systemd-logind[745]: Removed session 33.
Dec 13 02:16:08 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Dec 13 02:16:08 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 13 02:16:08 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Dec 13 02:16:08 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Dec 13 02:16:08 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Dec 13 02:16:08 np0005558317 podman[97480]: 2025-12-13 07:16:08.675813424 +0000 UTC m=+0.025264566 container create 66fdaa6548e1fe584fd276461a08a983c36d77d0a76c135c4d0d9a562120012c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:16:08 np0005558317 systemd[1]: Started libpod-conmon-66fdaa6548e1fe584fd276461a08a983c36d77d0a76c135c4d0d9a562120012c.scope.
Dec 13 02:16:08 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:16:08 np0005558317 podman[97480]: 2025-12-13 07:16:08.725914943 +0000 UTC m=+0.075366096 container init 66fdaa6548e1fe584fd276461a08a983c36d77d0a76c135c4d0d9a562120012c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_lewin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:16:08 np0005558317 podman[97480]: 2025-12-13 07:16:08.730249981 +0000 UTC m=+0.079701114 container start 66fdaa6548e1fe584fd276461a08a983c36d77d0a76c135c4d0d9a562120012c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:16:08 np0005558317 podman[97480]: 2025-12-13 07:16:08.731628574 +0000 UTC m=+0.081079705 container attach 66fdaa6548e1fe584fd276461a08a983c36d77d0a76c135c4d0d9a562120012c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_lewin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:16:08 np0005558317 modest_lewin[97493]: 167 167
Dec 13 02:16:08 np0005558317 systemd[1]: libpod-66fdaa6548e1fe584fd276461a08a983c36d77d0a76c135c4d0d9a562120012c.scope: Deactivated successfully.
Dec 13 02:16:08 np0005558317 conmon[97493]: conmon 66fdaa6548e1fe584fd2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-66fdaa6548e1fe584fd276461a08a983c36d77d0a76c135c4d0d9a562120012c.scope/container/memory.events
Dec 13 02:16:08 np0005558317 podman[97480]: 2025-12-13 07:16:08.733744667 +0000 UTC m=+0.083195819 container died 66fdaa6548e1fe584fd276461a08a983c36d77d0a76c135c4d0d9a562120012c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_lewin, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:16:08 np0005558317 systemd[1]: var-lib-containers-storage-overlay-454d080702d7dd9f66664ae4b36265d3a4e8624610064527c805c16d1d1e4939-merged.mount: Deactivated successfully.
Dec 13 02:16:08 np0005558317 podman[97480]: 2025-12-13 07:16:08.751877556 +0000 UTC m=+0.101328688 container remove 66fdaa6548e1fe584fd276461a08a983c36d77d0a76c135c4d0d9a562120012c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_lewin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 02:16:08 np0005558317 podman[97480]: 2025-12-13 07:16:08.665763223 +0000 UTC m=+0.015214375 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:16:08 np0005558317 systemd[1]: libpod-conmon-66fdaa6548e1fe584fd276461a08a983c36d77d0a76c135c4d0d9a562120012c.scope: Deactivated successfully.
Dec 13 02:16:08 np0005558317 podman[97515]: 2025-12-13 07:16:08.868168342 +0000 UTC m=+0.028788025 container create 773b59918a2ef59f9d993e0cc6ea4a7c164c262b81b25115a45e793f4a9b67c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_herschel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:16:08 np0005558317 systemd[1]: Started libpod-conmon-773b59918a2ef59f9d993e0cc6ea4a7c164c262b81b25115a45e793f4a9b67c6.scope.
Dec 13 02:16:08 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:16:08 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a17fc286d2740a991f563d140f2a066445d8a1deecdae4887b0f5740db3cfb7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:16:08 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a17fc286d2740a991f563d140f2a066445d8a1deecdae4887b0f5740db3cfb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:16:08 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a17fc286d2740a991f563d140f2a066445d8a1deecdae4887b0f5740db3cfb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:16:08 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a17fc286d2740a991f563d140f2a066445d8a1deecdae4887b0f5740db3cfb7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:16:08 np0005558317 podman[97515]: 2025-12-13 07:16:08.923922958 +0000 UTC m=+0.084542661 container init 773b59918a2ef59f9d993e0cc6ea4a7c164c262b81b25115a45e793f4a9b67c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_herschel, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:16:08 np0005558317 podman[97515]: 2025-12-13 07:16:08.929302383 +0000 UTC m=+0.089922066 container start 773b59918a2ef59f9d993e0cc6ea4a7c164c262b81b25115a45e793f4a9b67c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Dec 13 02:16:08 np0005558317 podman[97515]: 2025-12-13 07:16:08.930394287 +0000 UTC m=+0.091013970 container attach 773b59918a2ef59f9d993e0cc6ea4a7c164c262b81b25115a45e793f4a9b67c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:16:08 np0005558317 podman[97515]: 2025-12-13 07:16:08.855566602 +0000 UTC m=+0.016186305 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:16:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:16:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:16:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:16:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:16:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:16:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]: {
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:    "0": [
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:        {
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "devices": [
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "/dev/loop3"
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            ],
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "lv_name": "ceph_lv0",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "lv_size": "21470642176",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "name": "ceph_lv0",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "tags": {
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.cluster_name": "ceph",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.crush_device_class": "",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.encrypted": "0",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.objectstore": "bluestore",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.osd_id": "0",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.type": "block",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.vdo": "0",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.with_tpm": "0"
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            },
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "type": "block",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "vg_name": "ceph_vg0"
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:        }
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:    ],
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:    "1": [
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:        {
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "devices": [
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "/dev/loop4"
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            ],
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "lv_name": "ceph_lv1",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "lv_size": "21470642176",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "name": "ceph_lv1",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "tags": {
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.cluster_name": "ceph",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.crush_device_class": "",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.encrypted": "0",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.objectstore": "bluestore",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.osd_id": "1",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.type": "block",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.vdo": "0",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.with_tpm": "0"
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            },
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "type": "block",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "vg_name": "ceph_vg1"
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:        }
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:    ],
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:    "2": [
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:        {
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "devices": [
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "/dev/loop5"
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            ],
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "lv_name": "ceph_lv2",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "lv_size": "21470642176",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "name": "ceph_lv2",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "tags": {
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.cluster_name": "ceph",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.crush_device_class": "",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.encrypted": "0",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.objectstore": "bluestore",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.osd_id": "2",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.type": "block",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.vdo": "0",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:                "ceph.with_tpm": "0"
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            },
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "type": "block",
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:            "vg_name": "ceph_vg2"
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:        }
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]:    ]
Dec 13 02:16:09 np0005558317 inspiring_herschel[97528]: }
Dec 13 02:16:09 np0005558317 systemd[1]: libpod-773b59918a2ef59f9d993e0cc6ea4a7c164c262b81b25115a45e793f4a9b67c6.scope: Deactivated successfully.
Dec 13 02:16:09 np0005558317 podman[97537]: 2025-12-13 07:16:09.208375514 +0000 UTC m=+0.018011083 container died 773b59918a2ef59f9d993e0cc6ea4a7c164c262b81b25115a45e793f4a9b67c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3)
Dec 13 02:16:09 np0005558317 systemd[1]: var-lib-containers-storage-overlay-0a17fc286d2740a991f563d140f2a066445d8a1deecdae4887b0f5740db3cfb7-merged.mount: Deactivated successfully.
Dec 13 02:16:09 np0005558317 podman[97537]: 2025-12-13 07:16:09.22901131 +0000 UTC m=+0.038646879 container remove 773b59918a2ef59f9d993e0cc6ea4a7c164c262b81b25115a45e793f4a9b67c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 02:16:09 np0005558317 systemd[1]: libpod-conmon-773b59918a2ef59f9d993e0cc6ea4a7c164c262b81b25115a45e793f4a9b67c6.scope: Deactivated successfully.
Dec 13 02:16:09 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 13 02:16:09 np0005558317 podman[97610]: 2025-12-13 07:16:09.580054105 +0000 UTC m=+0.027718152 container create 443c021b8be5369bd1c86f219ca6880cd08780cbde94a6e3e80db68737f3a366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:16:09 np0005558317 systemd[1]: Started libpod-conmon-443c021b8be5369bd1c86f219ca6880cd08780cbde94a6e3e80db68737f3a366.scope.
Dec 13 02:16:09 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:16:09 np0005558317 podman[97610]: 2025-12-13 07:16:09.63955658 +0000 UTC m=+0.087220626 container init 443c021b8be5369bd1c86f219ca6880cd08780cbde94a6e3e80db68737f3a366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jennings, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:16:09 np0005558317 podman[97610]: 2025-12-13 07:16:09.644405721 +0000 UTC m=+0.092069757 container start 443c021b8be5369bd1c86f219ca6880cd08780cbde94a6e3e80db68737f3a366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jennings, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:16:09 np0005558317 podman[97610]: 2025-12-13 07:16:09.645662865 +0000 UTC m=+0.093326921 container attach 443c021b8be5369bd1c86f219ca6880cd08780cbde94a6e3e80db68737f3a366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jennings, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:16:09 np0005558317 ecstatic_jennings[97624]: 167 167
Dec 13 02:16:09 np0005558317 systemd[1]: libpod-443c021b8be5369bd1c86f219ca6880cd08780cbde94a6e3e80db68737f3a366.scope: Deactivated successfully.
Dec 13 02:16:09 np0005558317 podman[97610]: 2025-12-13 07:16:09.647848398 +0000 UTC m=+0.095512445 container died 443c021b8be5369bd1c86f219ca6880cd08780cbde94a6e3e80db68737f3a366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Dec 13 02:16:09 np0005558317 podman[97610]: 2025-12-13 07:16:09.569240854 +0000 UTC m=+0.016904910 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:16:09 np0005558317 podman[97610]: 2025-12-13 07:16:09.668070319 +0000 UTC m=+0.115734365 container remove 443c021b8be5369bd1c86f219ca6880cd08780cbde94a6e3e80db68737f3a366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jennings, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:16:09 np0005558317 systemd[1]: var-lib-containers-storage-overlay-5d8afbe030691f5c12daadafa01d59e7fccd21b7c1fe5b9dee7bb795ddf9f55d-merged.mount: Deactivated successfully.
Dec 13 02:16:09 np0005558317 systemd[1]: libpod-conmon-443c021b8be5369bd1c86f219ca6880cd08780cbde94a6e3e80db68737f3a366.scope: Deactivated successfully.
Dec 13 02:16:09 np0005558317 podman[97645]: 2025-12-13 07:16:09.777812061 +0000 UTC m=+0.026874582 container create 548f6ff34e815117b826f7874b1ec17518d16914a4e670c7cc1c93f718e6d031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_pike, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 02:16:09 np0005558317 systemd[1]: Started libpod-conmon-548f6ff34e815117b826f7874b1ec17518d16914a4e670c7cc1c93f718e6d031.scope.
Dec 13 02:16:09 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:16:09 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c2b4e1e85047b8940082097d278825b85c393676363fdf968634a460b2e8dae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:16:09 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c2b4e1e85047b8940082097d278825b85c393676363fdf968634a460b2e8dae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:16:09 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c2b4e1e85047b8940082097d278825b85c393676363fdf968634a460b2e8dae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:16:09 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c2b4e1e85047b8940082097d278825b85c393676363fdf968634a460b2e8dae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:16:09 np0005558317 podman[97645]: 2025-12-13 07:16:09.836369065 +0000 UTC m=+0.085431585 container init 548f6ff34e815117b826f7874b1ec17518d16914a4e670c7cc1c93f718e6d031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 02:16:09 np0005558317 podman[97645]: 2025-12-13 07:16:09.841546551 +0000 UTC m=+0.090609071 container start 548f6ff34e815117b826f7874b1ec17518d16914a4e670c7cc1c93f718e6d031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_pike, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:16:09 np0005558317 podman[97645]: 2025-12-13 07:16:09.842748543 +0000 UTC m=+0.091811063 container attach 548f6ff34e815117b826f7874b1ec17518d16914a4e670c7cc1c93f718e6d031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_pike, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:16:09 np0005558317 podman[97645]: 2025-12-13 07:16:09.767352583 +0000 UTC m=+0.016415123 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:16:10 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v126: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:16:10 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Dec 13 02:16:10 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec 13 02:16:10 np0005558317 lvm[97736]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:16:10 np0005558317 lvm[97737]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:16:10 np0005558317 lvm[97737]: VG ceph_vg1 finished
Dec 13 02:16:10 np0005558317 lvm[97736]: VG ceph_vg0 finished
Dec 13 02:16:10 np0005558317 lvm[97740]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:16:10 np0005558317 lvm[97740]: VG ceph_vg2 finished
Dec 13 02:16:10 np0005558317 awesome_pike[97659]: {}
Dec 13 02:16:10 np0005558317 lvm[97743]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:16:10 np0005558317 lvm[97743]: VG ceph_vg0 finished
Dec 13 02:16:10 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Dec 13 02:16:10 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec 13 02:16:10 np0005558317 podman[97645]: 2025-12-13 07:16:10.483277621 +0000 UTC m=+0.732340141 container died 548f6ff34e815117b826f7874b1ec17518d16914a4e670c7cc1c93f718e6d031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:16:10 np0005558317 systemd[1]: libpod-548f6ff34e815117b826f7874b1ec17518d16914a4e670c7cc1c93f718e6d031.scope: Deactivated successfully.
Dec 13 02:16:10 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 13 02:16:10 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Dec 13 02:16:10 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Dec 13 02:16:10 np0005558317 systemd[1]: var-lib-containers-storage-overlay-0c2b4e1e85047b8940082097d278825b85c393676363fdf968634a460b2e8dae-merged.mount: Deactivated successfully.
Dec 13 02:16:10 np0005558317 podman[97645]: 2025-12-13 07:16:10.50783982 +0000 UTC m=+0.756902339 container remove 548f6ff34e815117b826f7874b1ec17518d16914a4e670c7cc1c93f718e6d031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_pike, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:16:10 np0005558317 systemd[1]: libpod-conmon-548f6ff34e815117b826f7874b1ec17518d16914a4e670c7cc1c93f718e6d031.scope: Deactivated successfully.
Dec 13 02:16:10 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:16:10 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:16:10 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:16:10 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:16:11 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 13 02:16:11 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:16:11 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:16:12 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v128: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:16:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Dec 13 02:16:12 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Dec 13 02:16:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Dec 13 02:16:12 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Dec 13 02:16:12 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 13 02:16:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Dec 13 02:16:12 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Dec 13 02:16:12 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887648582s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 active pruub 125.730354309s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:12 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887540817s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730354309s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:12 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887639046s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 active pruub 125.730476379s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:12 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887612343s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730476379s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:12 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887332916s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 active pruub 125.730545044s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:12 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887315750s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730545044s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:12 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887020111s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 active pruub 125.730529785s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:12 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887008667s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730529785s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:12 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:12 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:12 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:12 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:16:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Dec 13 02:16:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Dec 13 02:16:12 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Dec 13 02:16:12 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:12 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:12 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:12 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:12 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:12 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:12 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:12 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:12 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:12 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:12 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:12 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:12 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:12 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:12 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:12 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:13 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.d scrub starts
Dec 13 02:16:13 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.d scrub ok
Dec 13 02:16:13 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Dec 13 02:16:13 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Dec 13 02:16:13 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 13 02:16:13 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Dec 13 02:16:13 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Dec 13 02:16:13 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Dec 13 02:16:13 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:13 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:13 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:13 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:14 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v132: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:16:14 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Dec 13 02:16:14 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Dec 13 02:16:14 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Dec 13 02:16:14 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Dec 13 02:16:14 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Dec 13 02:16:14 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Dec 13 02:16:14 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 13 02:16:14 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Dec 13 02:16:14 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Dec 13 02:16:14 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997287750s) [2] async=[2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 active pruub 134.053726196s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:14 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997888565s) [2] async=[2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 active pruub 134.054489136s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:14 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997838974s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.054489136s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:14 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997774124s) [2] async=[2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 active pruub 134.054504395s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:14 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997743607s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.054504395s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:14 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.998094559s) [2] async=[2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 active pruub 134.055038452s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:14 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743862152s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 active pruub 131.949691772s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:14 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743840218s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY pruub 131.949691772s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:14 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743666649s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 active pruub 131.949722290s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:14 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743650436s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY pruub 131.949722290s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:14 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743508339s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 active pruub 131.949981689s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:14 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743436813s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY pruub 131.949981689s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:14 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64 pruub=10.744418144s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=43'551 active pruub 132.951171875s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:14 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64 pruub=10.744332314s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=43'551 unknown NOTIFY pruub 132.951171875s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:14 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.998066902s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.055038452s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:14 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.994911194s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.053726196s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:14 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:14 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:14 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:14 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:14 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:14 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:14 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:14 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:14 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:14 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:14 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:14 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:15 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 13 02:16:15 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Dec 13 02:16:15 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Dec 13 02:16:15 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Dec 13 02:16:15 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:15 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:15 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:15 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:15 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:15 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:15 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:15 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:15 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:15 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:15 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=64/65 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:15 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:15 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:15 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:15 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:15 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:15 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:15 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:15 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:15 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:16 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v135: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:16:16 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Dec 13 02:16:16 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Dec 13 02:16:16 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Dec 13 02:16:16 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Dec 13 02:16:16 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Dec 13 02:16:16 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Dec 13 02:16:16 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Dec 13 02:16:16 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Dec 13 02:16:16 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 13 02:16:16 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Dec 13 02:16:16 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Dec 13 02:16:16 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:16 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:16 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:16 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:17 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Dec 13 02:16:17 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Dec 13 02:16:17 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.f scrub starts
Dec 13 02:16:17 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.f scrub ok
Dec 13 02:16:17 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.774515152s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 active pruub 133.730743408s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:17 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.774483681s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 133.730743408s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:17 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.774012566s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 active pruub 133.730667114s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:17 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.773996353s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 133.730667114s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:17 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:17 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:16:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Dec 13 02:16:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Dec 13 02:16:17 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:17 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:17 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:17 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:17 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Dec 13 02:16:17 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.012885094s) [2] async=[2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 active pruub 140.214294434s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:17 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.012840271s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY pruub 140.214294434s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:17 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:17 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:17 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:17 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:17 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:17 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:17 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.011977196s) [2] async=[2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 active pruub 140.214172363s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:17 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.011938095s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY pruub 140.214172363s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:17 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:17 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:17 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.011684418s) [2] async=[2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 active pruub 140.214202881s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:17 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:17 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:17 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:17 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:17 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.011443138s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY pruub 140.214202881s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:17 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=15.010023117s) [2] async=[2] r=-1 lpr=67 pi=[55,67)/1 crt=43'551 active pruub 140.213317871s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:17 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=15.009953499s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=43'551 unknown NOTIFY pruub 140.213317871s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:17 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 13 02:16:18 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v138: 321 pgs: 4 active+remapped, 317 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 290 B/s, 5 objects/s recovering
Dec 13 02:16:18 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Dec 13 02:16:18 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Dec 13 02:16:18 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Dec 13 02:16:18 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 13 02:16:18 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Dec 13 02:16:18 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Dec 13 02:16:18 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:18 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:18 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:18 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:18 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:18 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:18 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Dec 13 02:16:18 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 13 02:16:19 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Dec 13 02:16:19 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Dec 13 02:16:19 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Dec 13 02:16:19 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Dec 13 02:16:19 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Dec 13 02:16:19 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.981441498s) [2] async=[2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 active pruub 139.056365967s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:19 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.981397629s) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 139.056365967s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:19 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.981289864s) [2] async=[2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 active pruub 139.056335449s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:19 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.980957031s) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 139.056335449s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:19 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:19 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:19 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:19 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:20 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v141: 321 pgs: 4 peering, 317 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 498 B/s, 11 objects/s recovering
Dec 13 02:16:20 np0005558317 systemd-logind[745]: New session 34 of user zuul.
Dec 13 02:16:20 np0005558317 systemd[1]: Started Session 34 of User zuul.
Dec 13 02:16:20 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Dec 13 02:16:20 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Dec 13 02:16:20 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Dec 13 02:16:20 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=69/70 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:20 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=69/70 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:21 np0005558317 python3.9[97931]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:16:22 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v143: 321 pgs: 4 peering, 317 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 183 B/s, 5 objects/s recovering
Dec 13 02:16:22 np0005558317 python3.9[98149]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:16:22 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Dec 13 02:16:22 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Dec 13 02:16:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:16:22 np0005558317 systemd[1]: session-34.scope: Deactivated successfully.
Dec 13 02:16:22 np0005558317 systemd[1]: session-34.scope: Consumed 1.377s CPU time.
Dec 13 02:16:22 np0005558317 systemd-logind[745]: Session 34 logged out. Waiting for processes to exit.
Dec 13 02:16:22 np0005558317 systemd-logind[745]: Removed session 34.
Dec 13 02:16:23 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Dec 13 02:16:23 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Dec 13 02:16:23 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Dec 13 02:16:23 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Dec 13 02:16:24 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v144: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 226 B/s, 6 objects/s recovering
Dec 13 02:16:24 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Dec 13 02:16:24 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Dec 13 02:16:24 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Dec 13 02:16:24 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Dec 13 02:16:24 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Dec 13 02:16:24 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Dec 13 02:16:24 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Dec 13 02:16:24 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 13 02:16:24 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Dec 13 02:16:24 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Dec 13 02:16:24 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Dec 13 02:16:25 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Dec 13 02:16:25 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Dec 13 02:16:25 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec 13 02:16:25 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec 13 02:16:25 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 13 02:16:26 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v146: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 81 B/s, 2 objects/s recovering
Dec 13 02:16:26 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Dec 13 02:16:26 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Dec 13 02:16:26 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec 13 02:16:26 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec 13 02:16:26 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec 13 02:16:26 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec 13 02:16:26 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Dec 13 02:16:26 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 13 02:16:26 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Dec 13 02:16:26 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Dec 13 02:16:26 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Dec 13 02:16:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:16:27 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 13 02:16:28 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v148: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 70 B/s, 1 objects/s recovering
Dec 13 02:16:28 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Dec 13 02:16:28 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Dec 13 02:16:28 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Dec 13 02:16:28 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Dec 13 02:16:28 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Dec 13 02:16:28 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Dec 13 02:16:28 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 13 02:16:28 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Dec 13 02:16:28 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Dec 13 02:16:28 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499458313s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 active pruub 141.730636597s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:28 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499427795s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 141.730636597s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:28 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499320030s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 active pruub 141.730758667s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:28 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:28 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499304771s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 141.730758667s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:28 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:29 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Dec 13 02:16:29 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 13 02:16:29 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Dec 13 02:16:29 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Dec 13 02:16:29 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:29 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:29 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:29 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:30 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v151: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:16:30 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Dec 13 02:16:30 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Dec 13 02:16:30 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Dec 13 02:16:30 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Dec 13 02:16:30 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Dec 13 02:16:30 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Dec 13 02:16:30 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 13 02:16:30 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Dec 13 02:16:30 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Dec 13 02:16:30 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:30 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:31 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec 13 02:16:31 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec 13 02:16:31 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Dec 13 02:16:31 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 13 02:16:31 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Dec 13 02:16:31 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Dec 13 02:16:31 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031506538s) [2] async=[2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 active pruub 151.159759521s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:31 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031457901s) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 151.159759521s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:31 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031195641s) [2] async=[2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 active pruub 151.159759521s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:31 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031086922s) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 151.159759521s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:31 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:31 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:31 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:31 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:32 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v154: 321 pgs: 2 remapped+peering, 319 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:16:32 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Dec 13 02:16:32 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Dec 13 02:16:32 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Dec 13 02:16:32 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Dec 13 02:16:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:16:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Dec 13 02:16:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Dec 13 02:16:32 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Dec 13 02:16:32 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:32 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:33 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Dec 13 02:16:33 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Dec 13 02:16:34 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v156: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 93 B/s, 3 objects/s recovering
Dec 13 02:16:34 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Dec 13 02:16:34 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Dec 13 02:16:34 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Dec 13 02:16:34 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Dec 13 02:16:34 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 13 02:16:34 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Dec 13 02:16:34 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Dec 13 02:16:35 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Dec 13 02:16:35 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Dec 13 02:16:35 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 13 02:16:36 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v158: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 76 B/s, 2 objects/s recovering
Dec 13 02:16:36 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Dec 13 02:16:36 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Dec 13 02:16:36 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Dec 13 02:16:36 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Dec 13 02:16:36 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Dec 13 02:16:36 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 13 02:16:36 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Dec 13 02:16:36 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Dec 13 02:16:36 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Dec 13 02:16:37 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Dec 13 02:16:37 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Dec 13 02:16:37 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Dec 13 02:16:37 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Dec 13 02:16:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:16:37 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 13 02:16:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:16:38
Dec 13 02:16:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:16:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 02:16:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'volumes', '.mgr', 'default.rgw.control', 'backups', 'default.rgw.log']
Dec 13 02:16:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:16:38 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v160: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 2 objects/s recovering
Dec 13 02:16:38 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Dec 13 02:16:38 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Dec 13 02:16:38 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Dec 13 02:16:38 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Dec 13 02:16:38 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Dec 13 02:16:38 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Dec 13 02:16:38 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 13 02:16:38 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Dec 13 02:16:38 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Dec 13 02:16:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:16:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:16:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:16:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:16:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:16:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:16:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:16:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:16:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:16:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:16:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:16:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:16:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:16:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:16:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:16:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:16:39 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Dec 13 02:16:39 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Dec 13 02:16:39 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Dec 13 02:16:39 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Dec 13 02:16:39 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 13 02:16:40 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v162: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:16:40 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Dec 13 02:16:40 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Dec 13 02:16:40 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Dec 13 02:16:40 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Dec 13 02:16:40 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Dec 13 02:16:40 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 13 02:16:40 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Dec 13 02:16:40 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Dec 13 02:16:40 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Dec 13 02:16:41 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Dec 13 02:16:41 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Dec 13 02:16:41 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 13 02:16:42 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v164: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:16:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Dec 13 02:16:42 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Dec 13 02:16:42 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Dec 13 02:16:42 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Dec 13 02:16:42 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Dec 13 02:16:42 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Dec 13 02:16:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:16:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Dec 13 02:16:42 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Dec 13 02:16:42 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 13 02:16:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Dec 13 02:16:42 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Dec 13 02:16:43 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.a scrub starts
Dec 13 02:16:43 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.a scrub ok
Dec 13 02:16:43 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 13 02:16:44 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v166: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:16:44 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Dec 13 02:16:44 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Dec 13 02:16:44 np0005558317 systemd-logind[745]: New session 35 of user zuul.
Dec 13 02:16:44 np0005558317 systemd[1]: Started Session 35 of User zuul.
Dec 13 02:16:44 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Dec 13 02:16:44 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Dec 13 02:16:44 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Dec 13 02:16:44 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Dec 13 02:16:44 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 13 02:16:44 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Dec 13 02:16:44 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Dec 13 02:16:44 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 83 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83 pruub=11.542116165s) [2] r=-1 lpr=83 pi=[54,83)/1 crt=43'551 active pruub 163.950912476s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:44 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 83 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83 pruub=11.541775703s) [2] r=-1 lpr=83 pi=[54,83)/1 crt=43'551 unknown NOTIFY pruub 163.950912476s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:44 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:45 np0005558317 python3.9[98333]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:16:45 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Dec 13 02:16:45 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Dec 13 02:16:45 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.e scrub starts
Dec 13 02:16:45 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.e scrub ok
Dec 13 02:16:45 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Dec 13 02:16:45 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Dec 13 02:16:45 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Dec 13 02:16:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 84 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:45 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 84 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:45 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 13 02:16:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:45 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:46 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v169: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:16:46 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Dec 13 02:16:46 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Dec 13 02:16:46 np0005558317 python3.9[98551]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:16:46 np0005558317 systemd[1]: session-35.scope: Deactivated successfully.
Dec 13 02:16:46 np0005558317 systemd[1]: session-35.scope: Consumed 1.381s CPU time.
Dec 13 02:16:46 np0005558317 systemd-logind[745]: Session 35 logged out. Waiting for processes to exit.
Dec 13 02:16:46 np0005558317 systemd-logind[745]: Removed session 35.
Dec 13 02:16:46 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Dec 13 02:16:46 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 13 02:16:46 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Dec 13 02:16:46 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Dec 13 02:16:46 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 85 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:46 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Dec 13 02:16:46 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 13 02:16:47 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Dec 13 02:16:47 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Dec 13 02:16:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:16:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Dec 13 02:16:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Dec 13 02:16:47 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:47 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:47 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Dec 13 02:16:47 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86 pruub=15.131115913s) [2] async=[2] r=-1 lpr=86 pi=[54,86)/1 crt=43'551 active pruub 170.337738037s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:47 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86 pruub=15.131011963s) [2] r=-1 lpr=86 pi=[54,86)/1 crt=43'551 unknown NOTIFY pruub 170.337738037s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:48 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v172: 321 pgs: 1 active+remapped, 320 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Dec 13 02:16:48 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Dec 13 02:16:48 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Dec 13 02:16:48 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Dec 13 02:16:48 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Dec 13 02:16:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:16:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:16:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:16:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:16:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:16:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:16:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:16:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:16:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:16:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:16:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:16:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:16:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.541771007094942e-07 of space, bias 4.0, pg target 0.0009050125208513931 quantized to 16 (current 32)
Dec 13 02:16:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:16:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:16:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:16:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:16:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:16:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:16:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:16:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:16:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:16:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:16:48 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.d scrub starts
Dec 13 02:16:48 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.d scrub ok
Dec 13 02:16:48 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Dec 13 02:16:48 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 13 02:16:48 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Dec 13 02:16:48 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 87 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87 pruub=15.744177818s) [1] r=-1 lpr=87 pi=[54,87)/1 crt=43'551 active pruub 171.950607300s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:48 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 87 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87 pruub=15.743911743s) [1] r=-1 lpr=87 pi=[54,87)/1 crt=43'551 unknown NOTIFY pruub 171.950607300s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:48 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Dec 13 02:16:48 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:48 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 87 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=86/87 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:48 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Dec 13 02:16:48 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 13 02:16:49 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Dec 13 02:16:49 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Dec 13 02:16:49 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 88 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:49 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 88 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:49 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Dec 13 02:16:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:49 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:50 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v175: 321 pgs: 1 peering, 320 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Dec 13 02:16:50 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Dec 13 02:16:50 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Dec 13 02:16:50 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Dec 13 02:16:50 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 89 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:51 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Dec 13 02:16:51 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Dec 13 02:16:51 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Dec 13 02:16:51 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Dec 13 02:16:51 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Dec 13 02:16:51 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Dec 13 02:16:51 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90 pruub=14.988631248s) [1] async=[1] r=-1 lpr=90 pi=[54,90)/1 crt=43'551 active pruub 174.218078613s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:51 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90 pruub=14.988577843s) [1] r=-1 lpr=90 pi=[54,90)/1 crt=43'551 unknown NOTIFY pruub 174.218078613s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:51 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:51 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Dec 13 02:16:52 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v178: 321 pgs: 1 peering, 320 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:16:52 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Dec 13 02:16:52 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Dec 13 02:16:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:16:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Dec 13 02:16:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Dec 13 02:16:52 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Dec 13 02:16:52 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 91 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=90/91 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:53 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.a scrub starts
Dec 13 02:16:53 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.a scrub ok
Dec 13 02:16:53 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Dec 13 02:16:53 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Dec 13 02:16:54 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.f scrub starts
Dec 13 02:16:54 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.f scrub ok
Dec 13 02:16:54 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v180: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:16:54 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Dec 13 02:16:54 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Dec 13 02:16:54 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Dec 13 02:16:54 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 13 02:16:54 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Dec 13 02:16:54 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Dec 13 02:16:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 92 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=8.970234871s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=43'551 active pruub 165.223297119s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:54 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 92 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=8.970201492s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=43'551 unknown NOTIFY pruub 165.223297119s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:54 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:54 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Dec 13 02:16:55 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Dec 13 02:16:55 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Dec 13 02:16:55 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Dec 13 02:16:55 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Dec 13 02:16:55 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Dec 13 02:16:55 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:55 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:55 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:55 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:55 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 13 02:16:56 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v183: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 0 objects/s recovering
Dec 13 02:16:56 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Dec 13 02:16:56 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Dec 13 02:16:56 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.b scrub starts
Dec 13 02:16:56 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.b scrub ok
Dec 13 02:16:56 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Dec 13 02:16:56 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 13 02:16:56 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Dec 13 02:16:56 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Dec 13 02:16:56 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Dec 13 02:16:56 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:57 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Dec 13 02:16:57 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Dec 13 02:16:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:16:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Dec 13 02:16:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Dec 13 02:16:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:57 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:16:57 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Dec 13 02:16:57 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=15.050017357s) [0] async=[0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 active pruub 174.272872925s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:16:57 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=15.049875259s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 unknown NOTIFY pruub 174.272872925s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:16:57 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 13 02:16:58 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Dec 13 02:16:58 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Dec 13 02:16:58 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v186: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Dec 13 02:16:58 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Dec 13 02:16:58 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Dec 13 02:16:58 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Dec 13 02:16:58 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Dec 13 02:16:58 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec 13 02:16:58 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec 13 02:16:58 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Dec 13 02:16:58 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 13 02:16:58 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Dec 13 02:16:58 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Dec 13 02:16:58 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 96 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=95/96 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:16:58 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Dec 13 02:16:58 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 13 02:16:59 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Dec 13 02:16:59 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Dec 13 02:16:59 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Dec 13 02:16:59 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Dec 13 02:17:00 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v188: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:17:00 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Dec 13 02:17:00 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Dec 13 02:17:00 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec 13 02:17:00 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec 13 02:17:00 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec 13 02:17:00 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec 13 02:17:00 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Dec 13 02:17:00 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Dec 13 02:17:00 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 13 02:17:00 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Dec 13 02:17:00 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Dec 13 02:17:00 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 97 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97 pruub=11.676051140s) [2] r=-1 lpr=97 pi=[54,97)/1 crt=43'551 active pruub 179.950790405s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:17:00 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 97 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97 pruub=11.676016808s) [2] r=-1 lpr=97 pi=[54,97)/1 crt=43'551 unknown NOTIFY pruub 179.950790405s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:17:00 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:17:01 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Dec 13 02:17:01 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Dec 13 02:17:01 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Dec 13 02:17:01 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Dec 13 02:17:01 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Dec 13 02:17:01 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 13 02:17:01 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:17:01 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:17:01 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 98 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:17:01 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 98 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:17:02 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v191: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:17:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Dec 13 02:17:02 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Dec 13 02:17:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:17:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Dec 13 02:17:02 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 13 02:17:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Dec 13 02:17:02 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Dec 13 02:17:02 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Dec 13 02:17:03 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.a scrub starts
Dec 13 02:17:03 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 99 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] async=[2] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:17:03 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.a scrub ok
Dec 13 02:17:03 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Dec 13 02:17:03 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 13 02:17:03 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Dec 13 02:17:03 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Dec 13 02:17:03 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100 pruub=15.603092194s) [2] async=[2] r=-1 lpr=100 pi=[54,100)/1 crt=43'551 active pruub 186.894073486s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:17:03 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100 pruub=15.603034973s) [2] r=-1 lpr=100 pi=[54,100)/1 crt=43'551 unknown NOTIFY pruub 186.894073486s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:17:03 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:17:03 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:17:04 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v194: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Dec 13 02:17:04 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.e scrub starts
Dec 13 02:17:04 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.e scrub ok
Dec 13 02:17:04 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.17 scrub starts
Dec 13 02:17:04 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.17 scrub ok
Dec 13 02:17:04 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Dec 13 02:17:04 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Dec 13 02:17:04 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Dec 13 02:17:04 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 101 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=100/101 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:17:06 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v196: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 0 objects/s recovering
Dec 13 02:17:07 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Dec 13 02:17:07 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Dec 13 02:17:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:17:08 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Dec 13 02:17:08 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Dec 13 02:17:08 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v197: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Dec 13 02:17:09 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Dec 13 02:17:09 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Dec 13 02:17:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:17:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:17:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:17:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:17:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:17:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:17:10 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v198: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 55 B/s, 1 objects/s recovering
Dec 13 02:17:10 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Dec 13 02:17:10 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Dec 13 02:17:10 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Dec 13 02:17:10 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 13 02:17:10 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Dec 13 02:17:10 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Dec 13 02:17:10 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Dec 13 02:17:11 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:17:11 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:17:11 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:17:11 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:17:11 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:17:11 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:17:11 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:17:11 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:17:11 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:17:11 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:17:11 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:17:11 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:17:11 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 13 02:17:11 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:17:11 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:17:11 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:17:11 np0005558317 podman[98722]: 2025-12-13 07:17:11.376137066 +0000 UTC m=+0.026139257 container create 32133b2d40986e3e7b4e89aea1dca17a273ecd891a4d624de6e3c79c3f726090 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 02:17:11 np0005558317 systemd[1]: Started libpod-conmon-32133b2d40986e3e7b4e89aea1dca17a273ecd891a4d624de6e3c79c3f726090.scope.
Dec 13 02:17:11 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:17:11 np0005558317 podman[98722]: 2025-12-13 07:17:11.428853712 +0000 UTC m=+0.078855913 container init 32133b2d40986e3e7b4e89aea1dca17a273ecd891a4d624de6e3c79c3f726090 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_fermat, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 02:17:11 np0005558317 podman[98722]: 2025-12-13 07:17:11.433915229 +0000 UTC m=+0.083917410 container start 32133b2d40986e3e7b4e89aea1dca17a273ecd891a4d624de6e3c79c3f726090 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_fermat, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:17:11 np0005558317 podman[98722]: 2025-12-13 07:17:11.434828466 +0000 UTC m=+0.084830666 container attach 32133b2d40986e3e7b4e89aea1dca17a273ecd891a4d624de6e3c79c3f726090 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_fermat, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:17:11 np0005558317 priceless_fermat[98735]: 167 167
Dec 13 02:17:11 np0005558317 systemd[1]: libpod-32133b2d40986e3e7b4e89aea1dca17a273ecd891a4d624de6e3c79c3f726090.scope: Deactivated successfully.
Dec 13 02:17:11 np0005558317 conmon[98735]: conmon 32133b2d40986e3e7b4e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-32133b2d40986e3e7b4e89aea1dca17a273ecd891a4d624de6e3c79c3f726090.scope/container/memory.events
Dec 13 02:17:11 np0005558317 podman[98722]: 2025-12-13 07:17:11.438294141 +0000 UTC m=+0.088296323 container died 32133b2d40986e3e7b4e89aea1dca17a273ecd891a4d624de6e3c79c3f726090 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_fermat, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 02:17:11 np0005558317 systemd[1]: var-lib-containers-storage-overlay-eb37cb0d0af3468a544873094f47f0cb4f1b5ee20c3aba764bfab891467521d2-merged.mount: Deactivated successfully.
Dec 13 02:17:11 np0005558317 podman[98722]: 2025-12-13 07:17:11.457682506 +0000 UTC m=+0.107684686 container remove 32133b2d40986e3e7b4e89aea1dca17a273ecd891a4d624de6e3c79c3f726090 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:17:11 np0005558317 podman[98722]: 2025-12-13 07:17:11.365044109 +0000 UTC m=+0.015046300 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:17:11 np0005558317 systemd[1]: libpod-conmon-32133b2d40986e3e7b4e89aea1dca17a273ecd891a4d624de6e3c79c3f726090.scope: Deactivated successfully.
Dec 13 02:17:11 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Dec 13 02:17:11 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Dec 13 02:17:11 np0005558317 podman[98757]: 2025-12-13 07:17:11.567781881 +0000 UTC m=+0.026318825 container create 3b8e1449480c6725e3b9d2f87f114024d71000986b1483dc7a47a123c03ca830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 02:17:11 np0005558317 systemd[1]: Started libpod-conmon-3b8e1449480c6725e3b9d2f87f114024d71000986b1483dc7a47a123c03ca830.scope.
Dec 13 02:17:11 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:17:11 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3d609945e0dabe93181a9073fb9749b51368eef93287282f51163d9b4b375b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:17:11 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3d609945e0dabe93181a9073fb9749b51368eef93287282f51163d9b4b375b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:17:11 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3d609945e0dabe93181a9073fb9749b51368eef93287282f51163d9b4b375b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:17:11 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3d609945e0dabe93181a9073fb9749b51368eef93287282f51163d9b4b375b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:17:11 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3d609945e0dabe93181a9073fb9749b51368eef93287282f51163d9b4b375b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:17:11 np0005558317 podman[98757]: 2025-12-13 07:17:11.623649321 +0000 UTC m=+0.082186256 container init 3b8e1449480c6725e3b9d2f87f114024d71000986b1483dc7a47a123c03ca830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermi, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 02:17:11 np0005558317 podman[98757]: 2025-12-13 07:17:11.629633903 +0000 UTC m=+0.088170837 container start 3b8e1449480c6725e3b9d2f87f114024d71000986b1483dc7a47a123c03ca830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 02:17:11 np0005558317 podman[98757]: 2025-12-13 07:17:11.630771963 +0000 UTC m=+0.089308897 container attach 3b8e1449480c6725e3b9d2f87f114024d71000986b1483dc7a47a123c03ca830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:17:11 np0005558317 podman[98757]: 2025-12-13 07:17:11.557571233 +0000 UTC m=+0.016108187 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:17:11 np0005558317 cool_fermi[98770]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:17:11 np0005558317 cool_fermi[98770]: --> All data devices are unavailable
Dec 13 02:17:11 np0005558317 systemd[1]: libpod-3b8e1449480c6725e3b9d2f87f114024d71000986b1483dc7a47a123c03ca830.scope: Deactivated successfully.
Dec 13 02:17:11 np0005558317 conmon[98770]: conmon 3b8e1449480c6725e3b9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3b8e1449480c6725e3b9d2f87f114024d71000986b1483dc7a47a123c03ca830.scope/container/memory.events
Dec 13 02:17:11 np0005558317 podman[98757]: 2025-12-13 07:17:11.983226508 +0000 UTC m=+0.441763442 container died 3b8e1449480c6725e3b9d2f87f114024d71000986b1483dc7a47a123c03ca830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:17:11 np0005558317 systemd[1]: var-lib-containers-storage-overlay-dd3d609945e0dabe93181a9073fb9749b51368eef93287282f51163d9b4b375b-merged.mount: Deactivated successfully.
Dec 13 02:17:12 np0005558317 podman[98757]: 2025-12-13 07:17:12.003084435 +0000 UTC m=+0.461621368 container remove 3b8e1449480c6725e3b9d2f87f114024d71000986b1483dc7a47a123c03ca830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:17:12 np0005558317 systemd[1]: libpod-conmon-3b8e1449480c6725e3b9d2f87f114024d71000986b1483dc7a47a123c03ca830.scope: Deactivated successfully.
Dec 13 02:17:12 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Dec 13 02:17:12 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Dec 13 02:17:12 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v200: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 51 B/s, 1 objects/s recovering
Dec 13 02:17:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Dec 13 02:17:12 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Dec 13 02:17:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Dec 13 02:17:12 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 13 02:17:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Dec 13 02:17:12 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Dec 13 02:17:12 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Dec 13 02:17:12 np0005558317 podman[98861]: 2025-12-13 07:17:12.338398693 +0000 UTC m=+0.027200332 container create 721d6069759e19019431bff10a600465c483c373417234e93963ebe5b4d21015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:17:12 np0005558317 systemd[1]: Started libpod-conmon-721d6069759e19019431bff10a600465c483c373417234e93963ebe5b4d21015.scope.
Dec 13 02:17:12 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:17:12 np0005558317 podman[98861]: 2025-12-13 07:17:12.376804351 +0000 UTC m=+0.065606010 container init 721d6069759e19019431bff10a600465c483c373417234e93963ebe5b4d21015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_cray, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:17:12 np0005558317 podman[98861]: 2025-12-13 07:17:12.381220505 +0000 UTC m=+0.070022144 container start 721d6069759e19019431bff10a600465c483c373417234e93963ebe5b4d21015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_cray, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:17:12 np0005558317 podman[98861]: 2025-12-13 07:17:12.382672966 +0000 UTC m=+0.071474605 container attach 721d6069759e19019431bff10a600465c483c373417234e93963ebe5b4d21015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_cray, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:17:12 np0005558317 practical_cray[98875]: 167 167
Dec 13 02:17:12 np0005558317 systemd[1]: libpod-721d6069759e19019431bff10a600465c483c373417234e93963ebe5b4d21015.scope: Deactivated successfully.
Dec 13 02:17:12 np0005558317 podman[98861]: 2025-12-13 07:17:12.384546419 +0000 UTC m=+0.073348058 container died 721d6069759e19019431bff10a600465c483c373417234e93963ebe5b4d21015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:17:12 np0005558317 systemd[1]: var-lib-containers-storage-overlay-de55810d77bccad7ebf53b9bea3abde3afe6a3d4ad7186527f67c49b505ebcc6-merged.mount: Deactivated successfully.
Dec 13 02:17:12 np0005558317 podman[98861]: 2025-12-13 07:17:12.403290861 +0000 UTC m=+0.092092500 container remove 721d6069759e19019431bff10a600465c483c373417234e93963ebe5b4d21015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:17:12 np0005558317 podman[98861]: 2025-12-13 07:17:12.328684469 +0000 UTC m=+0.017486128 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:17:12 np0005558317 systemd[1]: libpod-conmon-721d6069759e19019431bff10a600465c483c373417234e93963ebe5b4d21015.scope: Deactivated successfully.
Dec 13 02:17:12 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.e scrub starts
Dec 13 02:17:12 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.e scrub ok
Dec 13 02:17:12 np0005558317 podman[98898]: 2025-12-13 07:17:12.517240806 +0000 UTC m=+0.029992542 container create 6a139bcc319cb51197f316e27764ad9ccec42a6ea33292fdf34fc2aff00feeec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wu, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:17:12 np0005558317 systemd[1]: Started libpod-conmon-6a139bcc319cb51197f316e27764ad9ccec42a6ea33292fdf34fc2aff00feeec.scope.
Dec 13 02:17:12 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:17:12 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87436af24b3a588628e95f8f214614f5784d8fc8e22d2115510ff1b75565949/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:17:12 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87436af24b3a588628e95f8f214614f5784d8fc8e22d2115510ff1b75565949/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:17:12 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87436af24b3a588628e95f8f214614f5784d8fc8e22d2115510ff1b75565949/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:17:12 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87436af24b3a588628e95f8f214614f5784d8fc8e22d2115510ff1b75565949/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:17:12 np0005558317 podman[98898]: 2025-12-13 07:17:12.573369438 +0000 UTC m=+0.086121203 container init 6a139bcc319cb51197f316e27764ad9ccec42a6ea33292fdf34fc2aff00feeec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wu, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 02:17:12 np0005558317 podman[98898]: 2025-12-13 07:17:12.578638665 +0000 UTC m=+0.091390411 container start 6a139bcc319cb51197f316e27764ad9ccec42a6ea33292fdf34fc2aff00feeec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wu, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:17:12 np0005558317 podman[98898]: 2025-12-13 07:17:12.579874228 +0000 UTC m=+0.092625965 container attach 6a139bcc319cb51197f316e27764ad9ccec42a6ea33292fdf34fc2aff00feeec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 02:17:12 np0005558317 podman[98898]: 2025-12-13 07:17:12.506245083 +0000 UTC m=+0.018996829 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:17:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:17:12 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Dec 13 02:17:12 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Dec 13 02:17:12 np0005558317 brave_wu[98911]: {
Dec 13 02:17:12 np0005558317 brave_wu[98911]:    "0": [
Dec 13 02:17:12 np0005558317 brave_wu[98911]:        {
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "devices": [
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "/dev/loop3"
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            ],
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "lv_name": "ceph_lv0",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "lv_size": "21470642176",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "name": "ceph_lv0",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "tags": {
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.cluster_name": "ceph",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.crush_device_class": "",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.encrypted": "0",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.objectstore": "bluestore",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.osd_id": "0",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.type": "block",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.vdo": "0",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.with_tpm": "0"
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            },
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "type": "block",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "vg_name": "ceph_vg0"
Dec 13 02:17:12 np0005558317 brave_wu[98911]:        }
Dec 13 02:17:12 np0005558317 brave_wu[98911]:    ],
Dec 13 02:17:12 np0005558317 brave_wu[98911]:    "1": [
Dec 13 02:17:12 np0005558317 brave_wu[98911]:        {
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "devices": [
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "/dev/loop4"
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            ],
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "lv_name": "ceph_lv1",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "lv_size": "21470642176",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "name": "ceph_lv1",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "tags": {
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.cluster_name": "ceph",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.crush_device_class": "",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.encrypted": "0",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.objectstore": "bluestore",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.osd_id": "1",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.type": "block",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.vdo": "0",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.with_tpm": "0"
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            },
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "type": "block",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "vg_name": "ceph_vg1"
Dec 13 02:17:12 np0005558317 brave_wu[98911]:        }
Dec 13 02:17:12 np0005558317 brave_wu[98911]:    ],
Dec 13 02:17:12 np0005558317 brave_wu[98911]:    "2": [
Dec 13 02:17:12 np0005558317 brave_wu[98911]:        {
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "devices": [
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "/dev/loop5"
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            ],
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "lv_name": "ceph_lv2",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "lv_size": "21470642176",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "name": "ceph_lv2",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "tags": {
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.cluster_name": "ceph",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.crush_device_class": "",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.encrypted": "0",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.objectstore": "bluestore",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.osd_id": "2",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.type": "block",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.vdo": "0",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:                "ceph.with_tpm": "0"
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            },
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "type": "block",
Dec 13 02:17:12 np0005558317 brave_wu[98911]:            "vg_name": "ceph_vg2"
Dec 13 02:17:12 np0005558317 brave_wu[98911]:        }
Dec 13 02:17:12 np0005558317 brave_wu[98911]:    ]
Dec 13 02:17:12 np0005558317 brave_wu[98911]: }
Dec 13 02:17:12 np0005558317 systemd[1]: libpod-6a139bcc319cb51197f316e27764ad9ccec42a6ea33292fdf34fc2aff00feeec.scope: Deactivated successfully.
Dec 13 02:17:12 np0005558317 podman[98898]: 2025-12-13 07:17:12.818921017 +0000 UTC m=+0.331672753 container died 6a139bcc319cb51197f316e27764ad9ccec42a6ea33292fdf34fc2aff00feeec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 02:17:12 np0005558317 systemd[1]: var-lib-containers-storage-overlay-a87436af24b3a588628e95f8f214614f5784d8fc8e22d2115510ff1b75565949-merged.mount: Deactivated successfully.
Dec 13 02:17:12 np0005558317 podman[98898]: 2025-12-13 07:17:12.841909901 +0000 UTC m=+0.354661637 container remove 6a139bcc319cb51197f316e27764ad9ccec42a6ea33292fdf34fc2aff00feeec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:17:12 np0005558317 systemd[1]: libpod-conmon-6a139bcc319cb51197f316e27764ad9ccec42a6ea33292fdf34fc2aff00feeec.scope: Deactivated successfully.
Dec 13 02:17:13 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Dec 13 02:17:13 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Dec 13 02:17:13 np0005558317 podman[98990]: 2025-12-13 07:17:13.180459182 +0000 UTC m=+0.026249205 container create a4f680c881e6a2ed54c6a77993f3738bbd406929b7c277f74a9ce22df8005edb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_villani, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:17:13 np0005558317 systemd[1]: Started libpod-conmon-a4f680c881e6a2ed54c6a77993f3738bbd406929b7c277f74a9ce22df8005edb.scope.
Dec 13 02:17:13 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:17:13 np0005558317 podman[98990]: 2025-12-13 07:17:13.225046352 +0000 UTC m=+0.070836376 container init a4f680c881e6a2ed54c6a77993f3738bbd406929b7c277f74a9ce22df8005edb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 02:17:13 np0005558317 podman[98990]: 2025-12-13 07:17:13.22936882 +0000 UTC m=+0.075158832 container start a4f680c881e6a2ed54c6a77993f3738bbd406929b7c277f74a9ce22df8005edb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Dec 13 02:17:13 np0005558317 podman[98990]: 2025-12-13 07:17:13.230426057 +0000 UTC m=+0.076216071 container attach a4f680c881e6a2ed54c6a77993f3738bbd406929b7c277f74a9ce22df8005edb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_villani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 02:17:13 np0005558317 musing_villani[99003]: 167 167
Dec 13 02:17:13 np0005558317 systemd[1]: libpod-a4f680c881e6a2ed54c6a77993f3738bbd406929b7c277f74a9ce22df8005edb.scope: Deactivated successfully.
Dec 13 02:17:13 np0005558317 podman[98990]: 2025-12-13 07:17:13.233178142 +0000 UTC m=+0.078968155 container died a4f680c881e6a2ed54c6a77993f3738bbd406929b7c277f74a9ce22df8005edb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 02:17:13 np0005558317 systemd[1]: var-lib-containers-storage-overlay-1fd46416ec748a196ab7a4f1d585b16d150671b63be6524a8ecdb2b284fe7a2e-merged.mount: Deactivated successfully.
Dec 13 02:17:13 np0005558317 podman[98990]: 2025-12-13 07:17:13.250418656 +0000 UTC m=+0.096208670 container remove a4f680c881e6a2ed54c6a77993f3738bbd406929b7c277f74a9ce22df8005edb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_villani, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 02:17:13 np0005558317 podman[98990]: 2025-12-13 07:17:13.169844854 +0000 UTC m=+0.015634888 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:17:13 np0005558317 systemd[1]: libpod-conmon-a4f680c881e6a2ed54c6a77993f3738bbd406929b7c277f74a9ce22df8005edb.scope: Deactivated successfully.
Dec 13 02:17:13 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 13 02:17:13 np0005558317 podman[99024]: 2025-12-13 07:17:13.366768695 +0000 UTC m=+0.025683941 container create 3fdb03f95a345b4c3d7a6696a5c6dd8e5750794cf82a83723d3d8c79e647faba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:17:13 np0005558317 systemd[1]: Started libpod-conmon-3fdb03f95a345b4c3d7a6696a5c6dd8e5750794cf82a83723d3d8c79e647faba.scope.
Dec 13 02:17:13 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:17:13 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e8f0278a1c08fcc1b4bbc16dcdd80a51cc13411f0ee3108bb2bc24883ea81a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:17:13 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e8f0278a1c08fcc1b4bbc16dcdd80a51cc13411f0ee3108bb2bc24883ea81a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:17:13 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e8f0278a1c08fcc1b4bbc16dcdd80a51cc13411f0ee3108bb2bc24883ea81a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:17:13 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e8f0278a1c08fcc1b4bbc16dcdd80a51cc13411f0ee3108bb2bc24883ea81a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:17:13 np0005558317 podman[99024]: 2025-12-13 07:17:13.423397005 +0000 UTC m=+0.082312271 container init 3fdb03f95a345b4c3d7a6696a5c6dd8e5750794cf82a83723d3d8c79e647faba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:17:13 np0005558317 podman[99024]: 2025-12-13 07:17:13.428992797 +0000 UTC m=+0.087908043 container start 3fdb03f95a345b4c3d7a6696a5c6dd8e5750794cf82a83723d3d8c79e647faba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:17:13 np0005558317 podman[99024]: 2025-12-13 07:17:13.430047179 +0000 UTC m=+0.088962425 container attach 3fdb03f95a345b4c3d7a6696a5c6dd8e5750794cf82a83723d3d8c79e647faba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 02:17:13 np0005558317 podman[99024]: 2025-12-13 07:17:13.356611246 +0000 UTC m=+0.015526513 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:17:13 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.f scrub starts
Dec 13 02:17:13 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.f scrub ok
Dec 13 02:17:13 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 103 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103 pruub=15.175204277s) [0] r=-1 lpr=103 pi=[76,103)/1 crt=43'551 active pruub 190.297714233s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:17:13 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 103 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103 pruub=15.175173759s) [0] r=-1 lpr=103 pi=[76,103)/1 crt=43'551 unknown NOTIFY pruub 190.297714233s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:17:13 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103) [0] r=0 lpr=103 pi=[76,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:17:13 np0005558317 lvm[99112]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:17:13 np0005558317 lvm[99112]: VG ceph_vg0 finished
Dec 13 02:17:13 np0005558317 lvm[99115]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:17:13 np0005558317 lvm[99115]: VG ceph_vg1 finished
Dec 13 02:17:13 np0005558317 lvm[99118]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:17:13 np0005558317 lvm[99118]: VG ceph_vg2 finished
Dec 13 02:17:13 np0005558317 lvm[99119]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:17:13 np0005558317 lvm[99119]: VG ceph_vg1 finished
Dec 13 02:17:13 np0005558317 hardcore_euler[99037]: {}
Dec 13 02:17:14 np0005558317 systemd[1]: libpod-3fdb03f95a345b4c3d7a6696a5c6dd8e5750794cf82a83723d3d8c79e647faba.scope: Deactivated successfully.
Dec 13 02:17:14 np0005558317 podman[99024]: 2025-12-13 07:17:14.015306698 +0000 UTC m=+0.674221944 container died 3fdb03f95a345b4c3d7a6696a5c6dd8e5750794cf82a83723d3d8c79e647faba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 02:17:14 np0005558317 lvm[99121]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:17:14 np0005558317 lvm[99121]: VG ceph_vg1 finished
Dec 13 02:17:14 np0005558317 systemd[1]: var-lib-containers-storage-overlay-48e8f0278a1c08fcc1b4bbc16dcdd80a51cc13411f0ee3108bb2bc24883ea81a-merged.mount: Deactivated successfully.
Dec 13 02:17:14 np0005558317 podman[99024]: 2025-12-13 07:17:14.039671598 +0000 UTC m=+0.698586844 container remove 3fdb03f95a345b4c3d7a6696a5c6dd8e5750794cf82a83723d3d8c79e647faba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:17:14 np0005558317 systemd[1]: libpod-conmon-3fdb03f95a345b4c3d7a6696a5c6dd8e5750794cf82a83723d3d8c79e647faba.scope: Deactivated successfully.
Dec 13 02:17:14 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:17:14 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:17:14 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:17:14 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:17:14 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v202: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 51 B/s, 1 objects/s recovering
Dec 13 02:17:14 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Dec 13 02:17:14 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Dec 13 02:17:14 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Dec 13 02:17:14 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec 13 02:17:14 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Dec 13 02:17:14 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Dec 13 02:17:14 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[76,104)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:17:14 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[76,104)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:17:14 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:17:14 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:17:14 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Dec 13 02:17:14 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:17:14 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:17:14 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.c scrub starts
Dec 13 02:17:14 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.c scrub ok
Dec 13 02:17:14 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Dec 13 02:17:14 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Dec 13 02:17:15 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Dec 13 02:17:15 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec 13 02:17:15 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Dec 13 02:17:15 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Dec 13 02:17:15 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:17:16 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v205: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:17:16 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Dec 13 02:17:16 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Dec 13 02:17:16 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Dec 13 02:17:16 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Dec 13 02:17:16 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 13 02:17:16 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Dec 13 02:17:16 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:17:16 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:17:16 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106 pruub=15.640290260s) [0] async=[0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 active pruub 193.476989746s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:17:16 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106 pruub=11.385339737s) [0] r=-1 lpr=106 pi=[64,106)/1 crt=43'551 active pruub 189.222137451s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:17:16 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106 pruub=11.385320663s) [0] r=-1 lpr=106 pi=[64,106)/1 crt=43'551 unknown NOTIFY pruub 189.222137451s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:17:16 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106) [0] r=0 lpr=106 pi=[64,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:17:16 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Dec 13 02:17:16 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106 pruub=15.639621735s) [0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 unknown NOTIFY pruub 193.476989746s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:17:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Dec 13 02:17:17 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 13 02:17:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Dec 13 02:17:17 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[64,107)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:17:17 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[64,107)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:17:17 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 107 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=106/107 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:17:17 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Dec 13 02:17:17 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:17:17 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:17:17 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Dec 13 02:17:17 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Dec 13 02:17:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:17:18 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v208: 321 pgs: 1 unknown, 1 peering, 319 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 66 B/s, 0 objects/s recovering
Dec 13 02:17:18 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Dec 13 02:17:18 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Dec 13 02:17:18 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Dec 13 02:17:18 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:17:19 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Dec 13 02:17:19 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Dec 13 02:17:19 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Dec 13 02:17:19 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:17:19 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:17:19 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109 pruub=15.559910774s) [0] async=[0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 active pruub 196.413848877s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:17:19 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109 pruub=15.559823990s) [0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 unknown NOTIFY pruub 196.413848877s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:17:19 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Dec 13 02:17:19 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Dec 13 02:17:20 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v211: 321 pgs: 1 activating+remapped, 1 peering, 319 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 6/251 objects misplaced (2.390%); 104 B/s, 2 objects/s recovering
Dec 13 02:17:20 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Dec 13 02:17:20 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Dec 13 02:17:20 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Dec 13 02:17:20 np0005558317 ceph-osd[85140]: osd.0 pg_epoch: 110 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=109/110 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:17:20 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Dec 13 02:17:20 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Dec 13 02:17:22 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v213: 321 pgs: 1 activating+remapped, 1 peering, 319 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 208 B/s wr, 10 op/s; 6/251 objects misplaced (2.390%); 30 B/s, 1 objects/s recovering
Dec 13 02:17:22 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Dec 13 02:17:22 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Dec 13 02:17:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:17:22 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Dec 13 02:17:22 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Dec 13 02:17:23 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Dec 13 02:17:24 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Dec 13 02:17:24 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v214: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 170 B/s wr, 8 op/s; 50 B/s, 1 objects/s recovering
Dec 13 02:17:24 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 02:17:24 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:17:24 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Dec 13 02:17:24 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:17:24 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:17:24 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Dec 13 02:17:24 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Dec 13 02:17:24 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 111 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111 pruub=14.319065094s) [1] r=-1 lpr=111 pi=[67,111)/1 crt=43'551 active pruub 200.222534180s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:17:24 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 111 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111 pruub=14.319033623s) [1] r=-1 lpr=111 pi=[67,111)/1 crt=43'551 unknown NOTIFY pruub 200.222534180s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:17:24 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:17:25 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Dec 13 02:17:25 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:17:25 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Dec 13 02:17:25 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Dec 13 02:17:25 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:17:25 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:17:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:17:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:17:25 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Dec 13 02:17:25 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Dec 13 02:17:26 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v217: 321 pgs: 1 unknown, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:17:26 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Dec 13 02:17:26 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Dec 13 02:17:26 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Dec 13 02:17:26 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Dec 13 02:17:26 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Dec 13 02:17:26 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Dec 13 02:17:26 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:17:26 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Dec 13 02:17:27 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec 13 02:17:27 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Dec 13 02:17:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Dec 13 02:17:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Dec 13 02:17:27 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Dec 13 02:17:27 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114 pruub=15.336898804s) [1] async=[1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 active pruub 204.257400513s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:17:27 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114 pruub=15.336791992s) [1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 unknown NOTIFY pruub 204.257400513s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:17:27 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:17:27 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:17:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:17:28 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Dec 13 02:17:28 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Dec 13 02:17:28 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v220: 321 pgs: 1 unknown, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:17:28 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Dec 13 02:17:28 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Dec 13 02:17:28 np0005558317 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Dec 13 02:17:28 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=114/115 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:17:28 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Dec 13 02:17:28 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Dec 13 02:17:28 np0005558317 systemd-logind[745]: New session 36 of user zuul.
Dec 13 02:17:28 np0005558317 systemd[1]: Started Session 36 of User zuul.
Dec 13 02:17:29 np0005558317 python3.9[99310]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:17:29 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Dec 13 02:17:29 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Dec 13 02:17:30 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v222: 321 pgs: 1 unknown, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:17:30 np0005558317 python3.9[99528]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:17:31 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Dec 13 02:17:31 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Dec 13 02:17:32 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.c scrub starts
Dec 13 02:17:32 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.c scrub ok
Dec 13 02:17:32 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v223: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 341 B/s wr, 8 op/s; 54 B/s, 2 objects/s recovering
Dec 13 02:17:32 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Dec 13 02:17:32 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Dec 13 02:17:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:17:33 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Dec 13 02:17:33 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Dec 13 02:17:34 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v224: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 261 B/s wr, 6 op/s; 42 B/s, 1 objects/s recovering
Dec 13 02:17:34 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Dec 13 02:17:34 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Dec 13 02:17:35 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.f scrub starts
Dec 13 02:17:35 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.f scrub ok
Dec 13 02:17:36 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Dec 13 02:17:36 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Dec 13 02:17:36 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v225: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 231 B/s wr, 5 op/s; 37 B/s, 1 objects/s recovering
Dec 13 02:17:36 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.a scrub starts
Dec 13 02:17:36 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.a scrub ok
Dec 13 02:17:36 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec 13 02:17:36 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec 13 02:17:37 np0005558317 systemd[1]: session-36.scope: Deactivated successfully.
Dec 13 02:17:37 np0005558317 systemd[1]: session-36.scope: Consumed 6.449s CPU time.
Dec 13 02:17:37 np0005558317 systemd-logind[745]: Session 36 logged out. Waiting for processes to exit.
Dec 13 02:17:37 np0005558317 systemd-logind[745]: Removed session 36.
Dec 13 02:17:37 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Dec 13 02:17:37 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Dec 13 02:17:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:17:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:17:38
Dec 13 02:17:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:17:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 02:17:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] pools ['.rgw.root', 'images', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups']
Dec 13 02:17:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:17:38 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v226: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 204 B/s wr, 5 op/s; 32 B/s, 1 objects/s recovering
Dec 13 02:17:38 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Dec 13 02:17:38 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Dec 13 02:17:38 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec 13 02:17:38 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec 13 02:17:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:17:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:17:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:17:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:17:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:17:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:17:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:17:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:17:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:17:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:17:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:17:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:17:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:17:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:17:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:17:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:17:39 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Dec 13 02:17:39 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Dec 13 02:17:39 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Dec 13 02:17:39 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Dec 13 02:17:39 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec 13 02:17:39 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.c scrub ok
Dec 13 02:17:40 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Dec 13 02:17:40 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Dec 13 02:17:40 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v227: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 173 B/s wr, 4 op/s; 27 B/s, 1 objects/s recovering
Dec 13 02:17:42 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v228: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 170 B/s wr, 4 op/s; 27 B/s, 1 objects/s recovering
Dec 13 02:17:42 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Dec 13 02:17:42 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Dec 13 02:17:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:17:43 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Dec 13 02:17:43 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Dec 13 02:17:44 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v229: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:17:45 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Dec 13 02:17:45 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Dec 13 02:17:46 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v230: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:17:46 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.c scrub starts
Dec 13 02:17:46 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.c scrub ok
Dec 13 02:17:47 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Dec 13 02:17:47 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Dec 13 02:17:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:17:48 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v231: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:17:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:17:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:17:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:17:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:17:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:17:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:17:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:17:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:17:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:17:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:17:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:17:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:17:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 02:17:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:17:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:17:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:17:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:17:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:17:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:17:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:17:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:17:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:17:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:17:49 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Dec 13 02:17:49 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Dec 13 02:17:49 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Dec 13 02:17:49 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Dec 13 02:17:49 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Dec 13 02:17:49 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Dec 13 02:17:50 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Dec 13 02:17:50 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Dec 13 02:17:50 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v232: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:17:52 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v233: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:17:52 np0005558317 systemd-logind[745]: New session 37 of user zuul.
Dec 13 02:17:52 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.b scrub starts
Dec 13 02:17:52 np0005558317 systemd[1]: Started Session 37 of User zuul.
Dec 13 02:17:52 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.b scrub ok
Dec 13 02:17:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:17:53 np0005558317 python3.9[99738]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 13 02:17:53 np0005558317 python3.9[99912]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:17:54 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v234: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:17:54 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Dec 13 02:17:54 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Dec 13 02:17:54 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Dec 13 02:17:54 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Dec 13 02:17:54 np0005558317 python3.9[100068]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:17:55 np0005558317 python3.9[100221]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:17:55 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Dec 13 02:17:55 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Dec 13 02:17:56 np0005558317 python3.9[100375]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:17:56 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v235: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:17:56 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.e scrub starts
Dec 13 02:17:56 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.e scrub ok
Dec 13 02:17:56 np0005558317 python3.9[100527]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:17:57 np0005558317 python3.9[100677]: ansible-ansible.builtin.service_facts Invoked
Dec 13 02:17:57 np0005558317 network[100694]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 02:17:57 np0005558317 network[100695]: 'network-scripts' will be removed from distribution in near future.
Dec 13 02:17:57 np0005558317 network[100696]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 02:17:57 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Dec 13 02:17:57 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Dec 13 02:17:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:17:58 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v236: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:17:58 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.a scrub starts
Dec 13 02:17:58 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Dec 13 02:17:58 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.a scrub ok
Dec 13 02:17:58 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Dec 13 02:17:59 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Dec 13 02:17:59 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Dec 13 02:17:59 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Dec 13 02:17:59 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Dec 13 02:18:00 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v237: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:00 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Dec 13 02:18:00 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Dec 13 02:18:00 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec 13 02:18:00 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec 13 02:18:00 np0005558317 python3.9[100956]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:18:01 np0005558317 python3.9[101106]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:18:01 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Dec 13 02:18:01 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Dec 13 02:18:01 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Dec 13 02:18:01 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Dec 13 02:18:02 np0005558317 python3.9[101260]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:18:02 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v238: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:18:02 np0005558317 python3.9[101418]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:18:03 np0005558317 python3.9[101502]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:18:03 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Dec 13 02:18:03 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Dec 13 02:18:04 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Dec 13 02:18:04 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v239: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:04 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Dec 13 02:18:05 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Dec 13 02:18:05 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Dec 13 02:18:05 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Dec 13 02:18:05 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Dec 13 02:18:06 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v240: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:06 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Dec 13 02:18:06 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Dec 13 02:18:06 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Dec 13 02:18:06 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Dec 13 02:18:06 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Dec 13 02:18:06 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Dec 13 02:18:07 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec 13 02:18:07 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec 13 02:18:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:18:08 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v241: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:08 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Dec 13 02:18:08 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Dec 13 02:18:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:18:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:18:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:18:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:18:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:18:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:18:09 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Dec 13 02:18:09 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Dec 13 02:18:10 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v242: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:10 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Dec 13 02:18:10 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Dec 13 02:18:10 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Dec 13 02:18:10 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Dec 13 02:18:11 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Dec 13 02:18:11 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Dec 13 02:18:12 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v243: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:18:13 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Dec 13 02:18:13 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Dec 13 02:18:14 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v244: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:14 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Dec 13 02:18:14 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Dec 13 02:18:14 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:18:14 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:18:14 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:18:14 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:18:14 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:18:14 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:18:14 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:18:14 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:18:14 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:18:14 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:18:14 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:18:14 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:18:14 np0005558317 podman[101713]: 2025-12-13 07:18:14.92982564 +0000 UTC m=+0.026403177 container create a61e39debae75637337943f67cc2c269bef86d33dac2b27f81115e53ea3fb26b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_thompson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 02:18:14 np0005558317 systemd[1]: Started libpod-conmon-a61e39debae75637337943f67cc2c269bef86d33dac2b27f81115e53ea3fb26b.scope.
Dec 13 02:18:14 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:18:14 np0005558317 podman[101713]: 2025-12-13 07:18:14.976885842 +0000 UTC m=+0.073463378 container init a61e39debae75637337943f67cc2c269bef86d33dac2b27f81115e53ea3fb26b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:18:14 np0005558317 podman[101713]: 2025-12-13 07:18:14.981325012 +0000 UTC m=+0.077902549 container start a61e39debae75637337943f67cc2c269bef86d33dac2b27f81115e53ea3fb26b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_thompson, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:18:14 np0005558317 podman[101713]: 2025-12-13 07:18:14.982398382 +0000 UTC m=+0.078975918 container attach a61e39debae75637337943f67cc2c269bef86d33dac2b27f81115e53ea3fb26b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_thompson, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:18:14 np0005558317 goofy_thompson[101725]: 167 167
Dec 13 02:18:14 np0005558317 systemd[1]: libpod-a61e39debae75637337943f67cc2c269bef86d33dac2b27f81115e53ea3fb26b.scope: Deactivated successfully.
Dec 13 02:18:14 np0005558317 podman[101713]: 2025-12-13 07:18:14.986780804 +0000 UTC m=+0.083358342 container died a61e39debae75637337943f67cc2c269bef86d33dac2b27f81115e53ea3fb26b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_thompson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:18:15 np0005558317 systemd[1]: var-lib-containers-storage-overlay-e4087827a87befefe8b62d9e4fe30061ffb9ba55e9f8cc51256119b588913f5b-merged.mount: Deactivated successfully.
Dec 13 02:18:15 np0005558317 podman[101713]: 2025-12-13 07:18:15.007913208 +0000 UTC m=+0.104490745 container remove a61e39debae75637337943f67cc2c269bef86d33dac2b27f81115e53ea3fb26b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_thompson, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:18:15 np0005558317 podman[101713]: 2025-12-13 07:18:14.918634347 +0000 UTC m=+0.015211904 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:18:15 np0005558317 systemd[1]: libpod-conmon-a61e39debae75637337943f67cc2c269bef86d33dac2b27f81115e53ea3fb26b.scope: Deactivated successfully.
Dec 13 02:18:15 np0005558317 podman[101748]: 2025-12-13 07:18:15.124046828 +0000 UTC m=+0.027929090 container create 2def15fcea25daf4bcb8cb944bd814e421f217831a53eddf67ca9ed936ba460a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 02:18:15 np0005558317 systemd[1]: Started libpod-conmon-2def15fcea25daf4bcb8cb944bd814e421f217831a53eddf67ca9ed936ba460a.scope.
Dec 13 02:18:15 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:18:15 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cdb8224432ad0883b841ad2d80c583b714ec93354de428c19a764da1f502d36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:18:15 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cdb8224432ad0883b841ad2d80c583b714ec93354de428c19a764da1f502d36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:18:15 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cdb8224432ad0883b841ad2d80c583b714ec93354de428c19a764da1f502d36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:18:15 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cdb8224432ad0883b841ad2d80c583b714ec93354de428c19a764da1f502d36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:18:15 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cdb8224432ad0883b841ad2d80c583b714ec93354de428c19a764da1f502d36/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:18:15 np0005558317 podman[101748]: 2025-12-13 07:18:15.181765166 +0000 UTC m=+0.085647449 container init 2def15fcea25daf4bcb8cb944bd814e421f217831a53eddf67ca9ed936ba460a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_saha, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 02:18:15 np0005558317 podman[101748]: 2025-12-13 07:18:15.189040207 +0000 UTC m=+0.092922468 container start 2def15fcea25daf4bcb8cb944bd814e421f217831a53eddf67ca9ed936ba460a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_saha, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Dec 13 02:18:15 np0005558317 podman[101748]: 2025-12-13 07:18:15.189986726 +0000 UTC m=+0.093868988 container attach 2def15fcea25daf4bcb8cb944bd814e421f217831a53eddf67ca9ed936ba460a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_saha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 02:18:15 np0005558317 podman[101748]: 2025-12-13 07:18:15.113382442 +0000 UTC m=+0.017264724 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:18:15 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Dec 13 02:18:15 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Dec 13 02:18:15 np0005558317 beautiful_saha[101761]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:18:15 np0005558317 beautiful_saha[101761]: --> All data devices are unavailable
Dec 13 02:18:15 np0005558317 systemd[1]: libpod-2def15fcea25daf4bcb8cb944bd814e421f217831a53eddf67ca9ed936ba460a.scope: Deactivated successfully.
Dec 13 02:18:15 np0005558317 podman[101748]: 2025-12-13 07:18:15.558108444 +0000 UTC m=+0.461990705 container died 2def15fcea25daf4bcb8cb944bd814e421f217831a53eddf67ca9ed936ba460a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_saha, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 02:18:15 np0005558317 systemd[1]: var-lib-containers-storage-overlay-1cdb8224432ad0883b841ad2d80c583b714ec93354de428c19a764da1f502d36-merged.mount: Deactivated successfully.
Dec 13 02:18:15 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:18:15 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:18:15 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:18:15 np0005558317 podman[101748]: 2025-12-13 07:18:15.583170736 +0000 UTC m=+0.487052997 container remove 2def15fcea25daf4bcb8cb944bd814e421f217831a53eddf67ca9ed936ba460a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_saha, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 02:18:15 np0005558317 systemd[1]: libpod-conmon-2def15fcea25daf4bcb8cb944bd814e421f217831a53eddf67ca9ed936ba460a.scope: Deactivated successfully.
Dec 13 02:18:15 np0005558317 podman[101851]: 2025-12-13 07:18:15.916348544 +0000 UTC m=+0.032260719 container create fe7a2b32c9028df0efb508436dc14e72f1f79050ec8ada51485c784f0b9496aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_murdock, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:18:15 np0005558317 systemd[1]: Started libpod-conmon-fe7a2b32c9028df0efb508436dc14e72f1f79050ec8ada51485c784f0b9496aa.scope.
Dec 13 02:18:15 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:18:15 np0005558317 podman[101851]: 2025-12-13 07:18:15.968006435 +0000 UTC m=+0.083918630 container init fe7a2b32c9028df0efb508436dc14e72f1f79050ec8ada51485c784f0b9496aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_murdock, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:18:15 np0005558317 podman[101851]: 2025-12-13 07:18:15.973149316 +0000 UTC m=+0.089061492 container start fe7a2b32c9028df0efb508436dc14e72f1f79050ec8ada51485c784f0b9496aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:18:15 np0005558317 podman[101851]: 2025-12-13 07:18:15.974379481 +0000 UTC m=+0.090291657 container attach fe7a2b32c9028df0efb508436dc14e72f1f79050ec8ada51485c784f0b9496aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_murdock, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 02:18:15 np0005558317 dreamy_murdock[101865]: 167 167
Dec 13 02:18:15 np0005558317 podman[101851]: 2025-12-13 07:18:15.975789275 +0000 UTC m=+0.091701451 container died fe7a2b32c9028df0efb508436dc14e72f1f79050ec8ada51485c784f0b9496aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 02:18:15 np0005558317 systemd[1]: libpod-fe7a2b32c9028df0efb508436dc14e72f1f79050ec8ada51485c784f0b9496aa.scope: Deactivated successfully.
Dec 13 02:18:15 np0005558317 systemd[1]: var-lib-containers-storage-overlay-f982b0683540cf617b44a648a576920fda045816fe40783b5d3100b300e636b5-merged.mount: Deactivated successfully.
Dec 13 02:18:15 np0005558317 podman[101851]: 2025-12-13 07:18:15.996068257 +0000 UTC m=+0.111980431 container remove fe7a2b32c9028df0efb508436dc14e72f1f79050ec8ada51485c784f0b9496aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_murdock, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 02:18:15 np0005558317 podman[101851]: 2025-12-13 07:18:15.90584502 +0000 UTC m=+0.021757205 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:18:16 np0005558317 systemd[1]: libpod-conmon-fe7a2b32c9028df0efb508436dc14e72f1f79050ec8ada51485c784f0b9496aa.scope: Deactivated successfully.
Dec 13 02:18:16 np0005558317 podman[101887]: 2025-12-13 07:18:16.108920726 +0000 UTC m=+0.027710328 container create c6cbfb303e72f8df25c509765155d1d3738c032b8d92b3739bc4522d53ea7c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sanderson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 02:18:16 np0005558317 systemd[1]: Started libpod-conmon-c6cbfb303e72f8df25c509765155d1d3738c032b8d92b3739bc4522d53ea7c1a.scope.
Dec 13 02:18:16 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:18:16 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96f18becab63c26534d0e988a01d6057a413f22a9c5283d43cc977bf9e92f7fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:18:16 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96f18becab63c26534d0e988a01d6057a413f22a9c5283d43cc977bf9e92f7fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:18:16 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96f18becab63c26534d0e988a01d6057a413f22a9c5283d43cc977bf9e92f7fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:18:16 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96f18becab63c26534d0e988a01d6057a413f22a9c5283d43cc977bf9e92f7fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:18:16 np0005558317 podman[101887]: 2025-12-13 07:18:16.160856642 +0000 UTC m=+0.079646266 container init c6cbfb303e72f8df25c509765155d1d3738c032b8d92b3739bc4522d53ea7c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sanderson, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 02:18:16 np0005558317 podman[101887]: 2025-12-13 07:18:16.166094553 +0000 UTC m=+0.084884156 container start c6cbfb303e72f8df25c509765155d1d3738c032b8d92b3739bc4522d53ea7c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 02:18:16 np0005558317 podman[101887]: 2025-12-13 07:18:16.169199182 +0000 UTC m=+0.087988784 container attach c6cbfb303e72f8df25c509765155d1d3738c032b8d92b3739bc4522d53ea7c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sanderson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 02:18:16 np0005558317 podman[101887]: 2025-12-13 07:18:16.098142693 +0000 UTC m=+0.016932316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:18:16 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v245: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:16 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.d scrub starts
Dec 13 02:18:16 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.d scrub ok
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]: {
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:    "0": [
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:        {
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "devices": [
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "/dev/loop3"
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            ],
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "lv_name": "ceph_lv0",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "lv_size": "21470642176",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "name": "ceph_lv0",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "tags": {
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.cluster_name": "ceph",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.crush_device_class": "",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.encrypted": "0",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.objectstore": "bluestore",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.osd_id": "0",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.type": "block",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.vdo": "0",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.with_tpm": "0"
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            },
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "type": "block",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "vg_name": "ceph_vg0"
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:        }
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:    ],
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:    "1": [
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:        {
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "devices": [
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "/dev/loop4"
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            ],
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "lv_name": "ceph_lv1",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "lv_size": "21470642176",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "name": "ceph_lv1",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "tags": {
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.cluster_name": "ceph",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.crush_device_class": "",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.encrypted": "0",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.objectstore": "bluestore",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.osd_id": "1",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.type": "block",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.vdo": "0",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.with_tpm": "0"
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            },
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "type": "block",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "vg_name": "ceph_vg1"
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:        }
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:    ],
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:    "2": [
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:        {
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "devices": [
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "/dev/loop5"
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            ],
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "lv_name": "ceph_lv2",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "lv_size": "21470642176",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "name": "ceph_lv2",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "tags": {
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.cluster_name": "ceph",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.crush_device_class": "",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.encrypted": "0",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.objectstore": "bluestore",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.osd_id": "2",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.type": "block",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.vdo": "0",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:                "ceph.with_tpm": "0"
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            },
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "type": "block",
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:            "vg_name": "ceph_vg2"
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:        }
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]:    ]
Dec 13 02:18:16 np0005558317 festive_sanderson[101900]: }
Dec 13 02:18:16 np0005558317 systemd[1]: libpod-c6cbfb303e72f8df25c509765155d1d3738c032b8d92b3739bc4522d53ea7c1a.scope: Deactivated successfully.
Dec 13 02:18:16 np0005558317 podman[101887]: 2025-12-13 07:18:16.406789133 +0000 UTC m=+0.325578736 container died c6cbfb303e72f8df25c509765155d1d3738c032b8d92b3739bc4522d53ea7c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sanderson, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 02:18:16 np0005558317 systemd[1]: var-lib-containers-storage-overlay-96f18becab63c26534d0e988a01d6057a413f22a9c5283d43cc977bf9e92f7fa-merged.mount: Deactivated successfully.
Dec 13 02:18:16 np0005558317 podman[101887]: 2025-12-13 07:18:16.42811896 +0000 UTC m=+0.346908563 container remove c6cbfb303e72f8df25c509765155d1d3738c032b8d92b3739bc4522d53ea7c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sanderson, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:18:16 np0005558317 systemd[1]: libpod-conmon-c6cbfb303e72f8df25c509765155d1d3738c032b8d92b3739bc4522d53ea7c1a.scope: Deactivated successfully.
Dec 13 02:18:16 np0005558317 podman[101979]: 2025-12-13 07:18:16.764056335 +0000 UTC m=+0.026314910 container create 577dda61e8804ed7bd28c739e1bd78ab51ea2357605f3e81dba760e54fe73cb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_turing, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:18:16 np0005558317 systemd[1]: Started libpod-conmon-577dda61e8804ed7bd28c739e1bd78ab51ea2357605f3e81dba760e54fe73cb0.scope.
Dec 13 02:18:16 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:18:16 np0005558317 podman[101979]: 2025-12-13 07:18:16.821644275 +0000 UTC m=+0.083902860 container init 577dda61e8804ed7bd28c739e1bd78ab51ea2357605f3e81dba760e54fe73cb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:18:16 np0005558317 podman[101979]: 2025-12-13 07:18:16.826815109 +0000 UTC m=+0.089073674 container start 577dda61e8804ed7bd28c739e1bd78ab51ea2357605f3e81dba760e54fe73cb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_turing, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 02:18:16 np0005558317 podman[101979]: 2025-12-13 07:18:16.827777999 +0000 UTC m=+0.090036563 container attach 577dda61e8804ed7bd28c739e1bd78ab51ea2357605f3e81dba760e54fe73cb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_turing, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:18:16 np0005558317 zealous_turing[101992]: 167 167
Dec 13 02:18:16 np0005558317 systemd[1]: libpod-577dda61e8804ed7bd28c739e1bd78ab51ea2357605f3e81dba760e54fe73cb0.scope: Deactivated successfully.
Dec 13 02:18:16 np0005558317 podman[101979]: 2025-12-13 07:18:16.829374496 +0000 UTC m=+0.091633061 container died 577dda61e8804ed7bd28c739e1bd78ab51ea2357605f3e81dba760e54fe73cb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 02:18:16 np0005558317 systemd[1]: var-lib-containers-storage-overlay-a2fbe5c6fbb85f1cbf6f65b9ce91f1358e51bc2b2d5eeabfeb677cce6e785734-merged.mount: Deactivated successfully.
Dec 13 02:18:16 np0005558317 podman[101979]: 2025-12-13 07:18:16.84533268 +0000 UTC m=+0.107591245 container remove 577dda61e8804ed7bd28c739e1bd78ab51ea2357605f3e81dba760e54fe73cb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 02:18:16 np0005558317 podman[101979]: 2025-12-13 07:18:16.753901521 +0000 UTC m=+0.016160106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:18:16 np0005558317 systemd[1]: libpod-conmon-577dda61e8804ed7bd28c739e1bd78ab51ea2357605f3e81dba760e54fe73cb0.scope: Deactivated successfully.
Dec 13 02:18:16 np0005558317 podman[102013]: 2025-12-13 07:18:16.953056664 +0000 UTC m=+0.025861754 container create 58a39ab197f7066601e2a5169f93687835bab978301decd31ff754723f2ac1cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sammet, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:18:16 np0005558317 systemd[1]: Started libpod-conmon-58a39ab197f7066601e2a5169f93687835bab978301decd31ff754723f2ac1cf.scope.
Dec 13 02:18:16 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:18:16 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/982a89a27ea56fa0ab758d913b42ffd7aa1aeb6d20eba840c04ce1d1e06cce73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:18:16 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/982a89a27ea56fa0ab758d913b42ffd7aa1aeb6d20eba840c04ce1d1e06cce73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:18:16 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/982a89a27ea56fa0ab758d913b42ffd7aa1aeb6d20eba840c04ce1d1e06cce73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:18:16 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/982a89a27ea56fa0ab758d913b42ffd7aa1aeb6d20eba840c04ce1d1e06cce73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:18:16 np0005558317 podman[102013]: 2025-12-13 07:18:16.998316353 +0000 UTC m=+0.071121453 container init 58a39ab197f7066601e2a5169f93687835bab978301decd31ff754723f2ac1cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sammet, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 02:18:17 np0005558317 podman[102013]: 2025-12-13 07:18:17.006264286 +0000 UTC m=+0.079069384 container start 58a39ab197f7066601e2a5169f93687835bab978301decd31ff754723f2ac1cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 02:18:17 np0005558317 podman[102013]: 2025-12-13 07:18:17.007390154 +0000 UTC m=+0.080195253 container attach 58a39ab197f7066601e2a5169f93687835bab978301decd31ff754723f2ac1cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sammet, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 02:18:17 np0005558317 podman[102013]: 2025-12-13 07:18:16.942968717 +0000 UTC m=+0.015773816 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:18:17 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Dec 13 02:18:17 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Dec 13 02:18:17 np0005558317 lvm[102104]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:18:17 np0005558317 lvm[102103]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:18:17 np0005558317 lvm[102104]: VG ceph_vg1 finished
Dec 13 02:18:17 np0005558317 lvm[102103]: VG ceph_vg0 finished
Dec 13 02:18:17 np0005558317 lvm[102107]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:18:17 np0005558317 lvm[102107]: VG ceph_vg2 finished
Dec 13 02:18:17 np0005558317 sleepy_sammet[102026]: {}
Dec 13 02:18:17 np0005558317 systemd[1]: libpod-58a39ab197f7066601e2a5169f93687835bab978301decd31ff754723f2ac1cf.scope: Deactivated successfully.
Dec 13 02:18:17 np0005558317 podman[102013]: 2025-12-13 07:18:17.623510539 +0000 UTC m=+0.696315627 container died 58a39ab197f7066601e2a5169f93687835bab978301decd31ff754723f2ac1cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sammet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 02:18:17 np0005558317 systemd[1]: var-lib-containers-storage-overlay-982a89a27ea56fa0ab758d913b42ffd7aa1aeb6d20eba840c04ce1d1e06cce73-merged.mount: Deactivated successfully.
Dec 13 02:18:17 np0005558317 podman[102013]: 2025-12-13 07:18:17.648091891 +0000 UTC m=+0.720896990 container remove 58a39ab197f7066601e2a5169f93687835bab978301decd31ff754723f2ac1cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sammet, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:18:17 np0005558317 systemd[1]: libpod-conmon-58a39ab197f7066601e2a5169f93687835bab978301decd31ff754723f2ac1cf.scope: Deactivated successfully.
Dec 13 02:18:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:18:17 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:18:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:18:17 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:18:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:18:18 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v246: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:18 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.d scrub starts
Dec 13 02:18:18 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Dec 13 02:18:18 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.d scrub ok
Dec 13 02:18:18 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Dec 13 02:18:18 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:18:18 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:18:19 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec 13 02:18:19 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.f scrub ok
Dec 13 02:18:19 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Dec 13 02:18:19 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Dec 13 02:18:19 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Dec 13 02:18:19 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Dec 13 02:18:20 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v247: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:20 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.c scrub starts
Dec 13 02:18:20 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.c scrub ok
Dec 13 02:18:20 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.f scrub starts
Dec 13 02:18:20 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.f scrub ok
Dec 13 02:18:21 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec 13 02:18:21 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec 13 02:18:21 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Dec 13 02:18:21 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Dec 13 02:18:22 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v248: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:22 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Dec 13 02:18:22 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Dec 13 02:18:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:18:23 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Dec 13 02:18:23 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Dec 13 02:18:24 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v249: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:24 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Dec 13 02:18:24 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Dec 13 02:18:24 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.e scrub starts
Dec 13 02:18:24 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.e scrub ok
Dec 13 02:18:25 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Dec 13 02:18:25 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Dec 13 02:18:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Dec 13 02:18:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Dec 13 02:18:25 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.c scrub starts
Dec 13 02:18:25 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.c scrub ok
Dec 13 02:18:26 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v250: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:27 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Dec 13 02:18:27 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Dec 13 02:18:27 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Dec 13 02:18:27 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Dec 13 02:18:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:18:28 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v251: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Dec 13 02:18:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Dec 13 02:18:30 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v252: 321 pgs: 321 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:30 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Dec 13 02:18:30 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Dec 13 02:18:31 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Dec 13 02:18:31 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Dec 13 02:18:32 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v253: 321 pgs: 321 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:18:33 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Dec 13 02:18:33 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Dec 13 02:18:34 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v254: 321 pgs: 1 active+clean+scrubbing, 320 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:34 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Dec 13 02:18:34 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Dec 13 02:18:34 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Dec 13 02:18:34 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Dec 13 02:18:36 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v255: 321 pgs: 1 active+clean+scrubbing, 320 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:36 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Dec 13 02:18:36 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Dec 13 02:18:36 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Dec 13 02:18:36 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Dec 13 02:18:37 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Dec 13 02:18:37 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Dec 13 02:18:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:18:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:18:38
Dec 13 02:18:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:18:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 02:18:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] pools ['vms', '.rgw.root', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'backups', '.mgr']
Dec 13 02:18:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:18:38 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v256: 321 pgs: 1 active+clean+scrubbing, 320 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:38 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Dec 13 02:18:38 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Dec 13 02:18:38 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Dec 13 02:18:38 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Dec 13 02:18:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:18:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:18:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:18:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:18:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:18:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:18:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:18:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:18:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:18:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:18:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:18:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:18:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:18:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:18:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:18:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:18:39 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Dec 13 02:18:39 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Dec 13 02:18:40 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v257: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:40 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Dec 13 02:18:40 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Dec 13 02:18:40 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Dec 13 02:18:40 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Dec 13 02:18:41 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Dec 13 02:18:41 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Dec 13 02:18:41 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Dec 13 02:18:41 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Dec 13 02:18:41 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Dec 13 02:18:41 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Dec 13 02:18:42 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v258: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:42 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Dec 13 02:18:42 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Dec 13 02:18:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:18:43 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Dec 13 02:18:43 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Dec 13 02:18:43 np0005558317 python3.9[102370]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:18:43 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.a scrub starts
Dec 13 02:18:43 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.a scrub ok
Dec 13 02:18:44 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v259: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:44 np0005558317 python3.9[102657]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 13 02:18:45 np0005558317 python3.9[102809]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 13 02:18:45 np0005558317 python3.9[102961]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:18:46 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v260: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:46 np0005558317 python3.9[103113]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 13 02:18:46 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Dec 13 02:18:46 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Dec 13 02:18:47 np0005558317 python3.9[103265]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:18:47 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Dec 13 02:18:47 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Dec 13 02:18:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:18:47 np0005558317 python3.9[103417]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:18:48 np0005558317 python3.9[103495]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:18:48 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v261: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:48 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.c scrub starts
Dec 13 02:18:48 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.c scrub ok
Dec 13 02:18:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:18:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:18:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:18:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:18:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:18:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:18:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:18:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:18:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:18:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:18:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:18:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:18:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 02:18:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:18:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:18:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:18:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:18:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:18:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:18:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:18:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:18:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:18:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:18:48 np0005558317 python3.9[103647]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:18:49 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Dec 13 02:18:49 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Dec 13 02:18:49 np0005558317 python3.9[103801]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 13 02:18:50 np0005558317 python3.9[103954]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 13 02:18:50 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.e scrub starts
Dec 13 02:18:50 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.e scrub ok
Dec 13 02:18:50 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v262: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:50 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Dec 13 02:18:50 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Dec 13 02:18:50 np0005558317 python3.9[104107]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 13 02:18:51 np0005558317 python3.9[104259]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 13 02:18:51 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Dec 13 02:18:51 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Dec 13 02:18:51 np0005558317 python3.9[104411]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:18:52 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Dec 13 02:18:52 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Dec 13 02:18:52 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v263: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:52 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Dec 13 02:18:52 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Dec 13 02:18:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:18:53 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.a scrub starts
Dec 13 02:18:53 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Dec 13 02:18:53 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.a scrub ok
Dec 13 02:18:53 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Dec 13 02:18:53 np0005558317 python3.9[104564]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:18:53 np0005558317 python3.9[104716]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:18:54 np0005558317 python3.9[104794]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:18:54 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v264: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:54 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Dec 13 02:18:54 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Dec 13 02:18:54 np0005558317 python3.9[104946]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:18:55 np0005558317 python3.9[105024]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:18:55 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Dec 13 02:18:55 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Dec 13 02:18:55 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.e scrub starts
Dec 13 02:18:55 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.e scrub ok
Dec 13 02:18:55 np0005558317 python3.9[105176]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:18:56 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v265: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:57 np0005558317 python3.9[105327]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:18:57 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Dec 13 02:18:57 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Dec 13 02:18:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:18:57 np0005558317 python3.9[105479]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 13 02:18:58 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v266: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:18:58 np0005558317 python3.9[105629]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:18:58 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Dec 13 02:18:58 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Dec 13 02:18:59 np0005558317 python3.9[105781]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:18:59 np0005558317 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 13 02:18:59 np0005558317 systemd[1]: tuned.service: Deactivated successfully.
Dec 13 02:18:59 np0005558317 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 13 02:18:59 np0005558317 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 13 02:18:59 np0005558317 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 13 02:19:00 np0005558317 python3.9[105942]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 13 02:19:00 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v267: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:01 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.e scrub starts
Dec 13 02:19:01 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.e scrub ok
Dec 13 02:19:01 np0005558317 python3.9[106094]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:19:02 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v268: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:02 np0005558317 python3.9[106248]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:19:02 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Dec 13 02:19:02 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Dec 13 02:19:02 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.d scrub starts
Dec 13 02:19:02 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.d scrub ok
Dec 13 02:19:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:19:02 np0005558317 systemd[1]: session-37.scope: Deactivated successfully.
Dec 13 02:19:02 np0005558317 systemd[1]: session-37.scope: Consumed 48.134s CPU time.
Dec 13 02:19:02 np0005558317 systemd-logind[745]: Session 37 logged out. Waiting for processes to exit.
Dec 13 02:19:02 np0005558317 systemd-logind[745]: Removed session 37.
Dec 13 02:19:03 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Dec 13 02:19:03 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Dec 13 02:19:03 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Dec 13 02:19:03 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Dec 13 02:19:04 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.c scrub starts
Dec 13 02:19:04 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v269: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:04 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.c scrub ok
Dec 13 02:19:05 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Dec 13 02:19:05 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Dec 13 02:19:06 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v270: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:07 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.f scrub starts
Dec 13 02:19:07 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.f scrub ok
Dec 13 02:19:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:19:08 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v271: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:08 np0005558317 systemd-logind[745]: New session 38 of user zuul.
Dec 13 02:19:08 np0005558317 systemd[1]: Started Session 38 of User zuul.
Dec 13 02:19:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:19:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:19:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:19:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:19:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:19:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:19:09 np0005558317 python3.9[106428]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:19:10 np0005558317 python3.9[106584]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 13 02:19:10 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v272: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:10 np0005558317 python3.9[106737]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:19:11 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.f scrub starts
Dec 13 02:19:11 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.f scrub ok
Dec 13 02:19:11 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Dec 13 02:19:11 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Dec 13 02:19:11 np0005558317 python3.9[106821]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 13 02:19:12 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Dec 13 02:19:12 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v273: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:12 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Dec 13 02:19:12 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Dec 13 02:19:12 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Dec 13 02:19:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:19:13 np0005558317 python3.9[106974]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:19:14 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v274: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:14 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Dec 13 02:19:14 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Dec 13 02:19:14 np0005558317 python3.9[107127]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 02:19:15 np0005558317 python3.9[107280]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:19:15 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Dec 13 02:19:15 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Dec 13 02:19:16 np0005558317 python3.9[107432]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 13 02:19:16 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v275: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:16 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Dec 13 02:19:16 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Dec 13 02:19:16 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.b scrub starts
Dec 13 02:19:16 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.b scrub ok
Dec 13 02:19:16 np0005558317 python3.9[107582]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:19:17 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Dec 13 02:19:17 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Dec 13 02:19:17 np0005558317 python3.9[107740]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:19:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:19:18 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:19:18 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:19:18 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:19:18 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:19:18 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:19:18 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:19:18 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:19:18 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:19:18 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:19:18 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:19:18 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:19:18 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:19:18 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v276: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:18 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:19:18 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:19:18 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:19:18 np0005558317 podman[107882]: 2025-12-13 07:19:18.511645749 +0000 UTC m=+0.026316586 container create cbd959b4c67faa3b04e081a96dd1264ebae14b105f50adf9193df32640e0a842 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_moser, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 02:19:18 np0005558317 systemd[76210]: Created slice User Background Tasks Slice.
Dec 13 02:19:18 np0005558317 systemd[1]: Started libpod-conmon-cbd959b4c67faa3b04e081a96dd1264ebae14b105f50adf9193df32640e0a842.scope.
Dec 13 02:19:18 np0005558317 systemd[76210]: Starting Cleanup of User's Temporary Files and Directories...
Dec 13 02:19:18 np0005558317 systemd[76210]: Finished Cleanup of User's Temporary Files and Directories.
Dec 13 02:19:18 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:19:18 np0005558317 podman[107882]: 2025-12-13 07:19:18.550824148 +0000 UTC m=+0.065494995 container init cbd959b4c67faa3b04e081a96dd1264ebae14b105f50adf9193df32640e0a842 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_moser, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 02:19:18 np0005558317 podman[107882]: 2025-12-13 07:19:18.556671691 +0000 UTC m=+0.071342508 container start cbd959b4c67faa3b04e081a96dd1264ebae14b105f50adf9193df32640e0a842 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 02:19:18 np0005558317 podman[107882]: 2025-12-13 07:19:18.558461803 +0000 UTC m=+0.073132629 container attach cbd959b4c67faa3b04e081a96dd1264ebae14b105f50adf9193df32640e0a842 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_moser, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:19:18 np0005558317 admiring_moser[107897]: 167 167
Dec 13 02:19:18 np0005558317 systemd[1]: libpod-cbd959b4c67faa3b04e081a96dd1264ebae14b105f50adf9193df32640e0a842.scope: Deactivated successfully.
Dec 13 02:19:18 np0005558317 podman[107882]: 2025-12-13 07:19:18.560894194 +0000 UTC m=+0.075565022 container died cbd959b4c67faa3b04e081a96dd1264ebae14b105f50adf9193df32640e0a842 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 02:19:18 np0005558317 systemd[1]: var-lib-containers-storage-overlay-1bf3fe1c90b19e3f1b7a5a6ac32a3fd91e83ec5ecee55183848eb0b9d99aa31e-merged.mount: Deactivated successfully.
Dec 13 02:19:18 np0005558317 podman[107882]: 2025-12-13 07:19:18.578091253 +0000 UTC m=+0.092762080 container remove cbd959b4c67faa3b04e081a96dd1264ebae14b105f50adf9193df32640e0a842 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_moser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 02:19:18 np0005558317 podman[107882]: 2025-12-13 07:19:18.501425319 +0000 UTC m=+0.016096166 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:19:18 np0005558317 systemd[1]: libpod-conmon-cbd959b4c67faa3b04e081a96dd1264ebae14b105f50adf9193df32640e0a842.scope: Deactivated successfully.
Dec 13 02:19:18 np0005558317 podman[107919]: 2025-12-13 07:19:18.695336814 +0000 UTC m=+0.030295530 container create 829d96c277523a10477ad48318abda630036a29c6a8c22d6ab30f2a9592c0d91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_carver, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:19:18 np0005558317 systemd[1]: Started libpod-conmon-829d96c277523a10477ad48318abda630036a29c6a8c22d6ab30f2a9592c0d91.scope.
Dec 13 02:19:18 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:19:18 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5bad39328723fb4bc64534ca9a9877d734ce9303056f7175f3960876af5a4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:19:18 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5bad39328723fb4bc64534ca9a9877d734ce9303056f7175f3960876af5a4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:19:18 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5bad39328723fb4bc64534ca9a9877d734ce9303056f7175f3960876af5a4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:19:18 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5bad39328723fb4bc64534ca9a9877d734ce9303056f7175f3960876af5a4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:19:18 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5bad39328723fb4bc64534ca9a9877d734ce9303056f7175f3960876af5a4d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:19:18 np0005558317 podman[107919]: 2025-12-13 07:19:18.761897846 +0000 UTC m=+0.096856562 container init 829d96c277523a10477ad48318abda630036a29c6a8c22d6ab30f2a9592c0d91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_carver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:19:18 np0005558317 podman[107919]: 2025-12-13 07:19:18.766645286 +0000 UTC m=+0.101603983 container start 829d96c277523a10477ad48318abda630036a29c6a8c22d6ab30f2a9592c0d91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 02:19:18 np0005558317 podman[107919]: 2025-12-13 07:19:18.768137577 +0000 UTC m=+0.103096293 container attach 829d96c277523a10477ad48318abda630036a29c6a8c22d6ab30f2a9592c0d91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_carver, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 02:19:18 np0005558317 podman[107919]: 2025-12-13 07:19:18.683978832 +0000 UTC m=+0.018937558 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:19:19 np0005558317 eager_carver[107956]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:19:19 np0005558317 eager_carver[107956]: --> All data devices are unavailable
Dec 13 02:19:19 np0005558317 systemd[1]: libpod-829d96c277523a10477ad48318abda630036a29c6a8c22d6ab30f2a9592c0d91.scope: Deactivated successfully.
Dec 13 02:19:19 np0005558317 conmon[107956]: conmon 829d96c277523a10477a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-829d96c277523a10477ad48318abda630036a29c6a8c22d6ab30f2a9592c0d91.scope/container/memory.events
Dec 13 02:19:19 np0005558317 podman[107919]: 2025-12-13 07:19:19.141747646 +0000 UTC m=+0.476706352 container died 829d96c277523a10477ad48318abda630036a29c6a8c22d6ab30f2a9592c0d91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_carver, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:19:19 np0005558317 systemd[1]: var-lib-containers-storage-overlay-7b5bad39328723fb4bc64534ca9a9877d734ce9303056f7175f3960876af5a4d-merged.mount: Deactivated successfully.
Dec 13 02:19:19 np0005558317 podman[107919]: 2025-12-13 07:19:19.165153398 +0000 UTC m=+0.500112104 container remove 829d96c277523a10477ad48318abda630036a29c6a8c22d6ab30f2a9592c0d91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_carver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 02:19:19 np0005558317 systemd[1]: libpod-conmon-829d96c277523a10477ad48318abda630036a29c6a8c22d6ab30f2a9592c0d91.scope: Deactivated successfully.
Dec 13 02:19:19 np0005558317 python3.9[108102]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:19:19 np0005558317 podman[108181]: 2025-12-13 07:19:19.505200525 +0000 UTC m=+0.028010496 container create 14386efd1faaf948bad73d74ba9f5d45be89a5d22539a15e89cd8c118c8a16f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ganguly, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 02:19:19 np0005558317 systemd[1]: Started libpod-conmon-14386efd1faaf948bad73d74ba9f5d45be89a5d22539a15e89cd8c118c8a16f1.scope.
Dec 13 02:19:19 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:19:19 np0005558317 podman[108181]: 2025-12-13 07:19:19.553740716 +0000 UTC m=+0.076550687 container init 14386efd1faaf948bad73d74ba9f5d45be89a5d22539a15e89cd8c118c8a16f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:19:19 np0005558317 podman[108181]: 2025-12-13 07:19:19.558088174 +0000 UTC m=+0.080898145 container start 14386efd1faaf948bad73d74ba9f5d45be89a5d22539a15e89cd8c118c8a16f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ganguly, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 02:19:19 np0005558317 boring_ganguly[108194]: 167 167
Dec 13 02:19:19 np0005558317 systemd[1]: libpod-14386efd1faaf948bad73d74ba9f5d45be89a5d22539a15e89cd8c118c8a16f1.scope: Deactivated successfully.
Dec 13 02:19:19 np0005558317 podman[108181]: 2025-12-13 07:19:19.561525427 +0000 UTC m=+0.084335399 container attach 14386efd1faaf948bad73d74ba9f5d45be89a5d22539a15e89cd8c118c8a16f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ganguly, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 02:19:19 np0005558317 podman[108181]: 2025-12-13 07:19:19.56175469 +0000 UTC m=+0.084564660 container died 14386efd1faaf948bad73d74ba9f5d45be89a5d22539a15e89cd8c118c8a16f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 02:19:19 np0005558317 systemd[1]: var-lib-containers-storage-overlay-2023caa6f189106792b0e203dfd46e7ee947f491e000559a7150e25e8e3b64c8-merged.mount: Deactivated successfully.
Dec 13 02:19:19 np0005558317 podman[108181]: 2025-12-13 07:19:19.577360442 +0000 UTC m=+0.100170413 container remove 14386efd1faaf948bad73d74ba9f5d45be89a5d22539a15e89cd8c118c8a16f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ganguly, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Dec 13 02:19:19 np0005558317 podman[108181]: 2025-12-13 07:19:19.493187119 +0000 UTC m=+0.015997110 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:19:19 np0005558317 systemd[1]: libpod-conmon-14386efd1faaf948bad73d74ba9f5d45be89a5d22539a15e89cd8c118c8a16f1.scope: Deactivated successfully.
Dec 13 02:19:19 np0005558317 podman[108324]: 2025-12-13 07:19:19.695251047 +0000 UTC m=+0.027677119 container create cbe6b89aede74110a9aa1715475ed2d6b4f07bfb210565644f666dd08e57eba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_albattani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 02:19:19 np0005558317 systemd[1]: Started libpod-conmon-cbe6b89aede74110a9aa1715475ed2d6b4f07bfb210565644f666dd08e57eba6.scope.
Dec 13 02:19:19 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:19:19 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9301518057fdbd103026b9bdcc04b54cf8cdd6557fda654ddf2ed891af1880e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:19:19 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9301518057fdbd103026b9bdcc04b54cf8cdd6557fda654ddf2ed891af1880e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:19:19 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9301518057fdbd103026b9bdcc04b54cf8cdd6557fda654ddf2ed891af1880e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:19:19 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9301518057fdbd103026b9bdcc04b54cf8cdd6557fda654ddf2ed891af1880e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:19:19 np0005558317 podman[108324]: 2025-12-13 07:19:19.750785441 +0000 UTC m=+0.083211532 container init cbe6b89aede74110a9aa1715475ed2d6b4f07bfb210565644f666dd08e57eba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 02:19:19 np0005558317 podman[108324]: 2025-12-13 07:19:19.755911536 +0000 UTC m=+0.088337607 container start cbe6b89aede74110a9aa1715475ed2d6b4f07bfb210565644f666dd08e57eba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_albattani, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 02:19:19 np0005558317 podman[108324]: 2025-12-13 07:19:19.757240279 +0000 UTC m=+0.089666360 container attach cbe6b89aede74110a9aa1715475ed2d6b4f07bfb210565644f666dd08e57eba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_albattani, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:19:19 np0005558317 podman[108324]: 2025-12-13 07:19:19.683723136 +0000 UTC m=+0.016149227 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:19:19 np0005558317 clever_albattani[108358]: {
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:    "0": [
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:        {
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "devices": [
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "/dev/loop3"
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            ],
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "lv_name": "ceph_lv0",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "lv_size": "21470642176",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "name": "ceph_lv0",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "tags": {
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.cluster_name": "ceph",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.crush_device_class": "",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.encrypted": "0",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.objectstore": "bluestore",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.osd_id": "0",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.type": "block",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.vdo": "0",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.with_tpm": "0"
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            },
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "type": "block",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "vg_name": "ceph_vg0"
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:        }
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:    ],
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:    "1": [
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:        {
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "devices": [
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "/dev/loop4"
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            ],
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "lv_name": "ceph_lv1",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "lv_size": "21470642176",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "name": "ceph_lv1",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "tags": {
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.cluster_name": "ceph",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.crush_device_class": "",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.encrypted": "0",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.objectstore": "bluestore",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.osd_id": "1",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.type": "block",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.vdo": "0",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.with_tpm": "0"
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            },
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "type": "block",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "vg_name": "ceph_vg1"
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:        }
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:    ],
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:    "2": [
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:        {
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "devices": [
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "/dev/loop5"
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            ],
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "lv_name": "ceph_lv2",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "lv_size": "21470642176",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "name": "ceph_lv2",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "tags": {
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.cluster_name": "ceph",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.crush_device_class": "",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.encrypted": "0",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.objectstore": "bluestore",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.osd_id": "2",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.type": "block",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.vdo": "0",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:                "ceph.with_tpm": "0"
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            },
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "type": "block",
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:            "vg_name": "ceph_vg2"
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:        }
Dec 13 02:19:19 np0005558317 clever_albattani[108358]:    ]
Dec 13 02:19:19 np0005558317 clever_albattani[108358]: }
Dec 13 02:19:19 np0005558317 podman[108324]: 2025-12-13 07:19:19.987960036 +0000 UTC m=+0.320386117 container died cbe6b89aede74110a9aa1715475ed2d6b4f07bfb210565644f666dd08e57eba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_albattani, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:19:19 np0005558317 systemd[1]: libpod-cbe6b89aede74110a9aa1715475ed2d6b4f07bfb210565644f666dd08e57eba6.scope: Deactivated successfully.
Dec 13 02:19:20 np0005558317 systemd[1]: var-lib-containers-storage-overlay-9301518057fdbd103026b9bdcc04b54cf8cdd6557fda654ddf2ed891af1880e0-merged.mount: Deactivated successfully.
Dec 13 02:19:20 np0005558317 podman[108324]: 2025-12-13 07:19:20.01132933 +0000 UTC m=+0.343755401 container remove cbe6b89aede74110a9aa1715475ed2d6b4f07bfb210565644f666dd08e57eba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_albattani, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 02:19:20 np0005558317 systemd[1]: libpod-conmon-cbe6b89aede74110a9aa1715475ed2d6b4f07bfb210565644f666dd08e57eba6.scope: Deactivated successfully.
Dec 13 02:19:20 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v277: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:20 np0005558317 podman[108587]: 2025-12-13 07:19:20.352981451 +0000 UTC m=+0.028892939 container create 45d1351cf9bd7a6aebf6ebde9c3026028f545b83b9fd19a92f0e05921b000f70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_easley, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:19:20 np0005558317 systemd[1]: Started libpod-conmon-45d1351cf9bd7a6aebf6ebde9c3026028f545b83b9fd19a92f0e05921b000f70.scope.
Dec 13 02:19:20 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:19:20 np0005558317 podman[108587]: 2025-12-13 07:19:20.406940897 +0000 UTC m=+0.082852395 container init 45d1351cf9bd7a6aebf6ebde9c3026028f545b83b9fd19a92f0e05921b000f70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 02:19:20 np0005558317 podman[108587]: 2025-12-13 07:19:20.411421096 +0000 UTC m=+0.087332574 container start 45d1351cf9bd7a6aebf6ebde9c3026028f545b83b9fd19a92f0e05921b000f70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_easley, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 02:19:20 np0005558317 podman[108587]: 2025-12-13 07:19:20.413465687 +0000 UTC m=+0.089377164 container attach 45d1351cf9bd7a6aebf6ebde9c3026028f545b83b9fd19a92f0e05921b000f70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_easley, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:19:20 np0005558317 suspicious_easley[108600]: 167 167
Dec 13 02:19:20 np0005558317 python3.9[108577]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 13 02:19:20 np0005558317 systemd[1]: libpod-45d1351cf9bd7a6aebf6ebde9c3026028f545b83b9fd19a92f0e05921b000f70.scope: Deactivated successfully.
Dec 13 02:19:20 np0005558317 podman[108587]: 2025-12-13 07:19:20.416320954 +0000 UTC m=+0.092232433 container died 45d1351cf9bd7a6aebf6ebde9c3026028f545b83b9fd19a92f0e05921b000f70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_easley, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:19:20 np0005558317 systemd[1]: var-lib-containers-storage-overlay-7926541ee0a6a86053c0887cc061dd8b57d189504e34ff199321459096afecbf-merged.mount: Deactivated successfully.
Dec 13 02:19:20 np0005558317 podman[108587]: 2025-12-13 07:19:20.436350919 +0000 UTC m=+0.112262397 container remove 45d1351cf9bd7a6aebf6ebde9c3026028f545b83b9fd19a92f0e05921b000f70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_easley, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:19:20 np0005558317 podman[108587]: 2025-12-13 07:19:20.341501088 +0000 UTC m=+0.017412586 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:19:20 np0005558317 systemd[1]: libpod-conmon-45d1351cf9bd7a6aebf6ebde9c3026028f545b83b9fd19a92f0e05921b000f70.scope: Deactivated successfully.
Dec 13 02:19:20 np0005558317 podman[108646]: 2025-12-13 07:19:20.548846764 +0000 UTC m=+0.028061272 container create 7b5a14aa1de76fb1bfe76a92735f891d19e1474009d6b4aed1261429ca0c6180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_banach, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 02:19:20 np0005558317 systemd[1]: Started libpod-conmon-7b5a14aa1de76fb1bfe76a92735f891d19e1474009d6b4aed1261429ca0c6180.scope.
Dec 13 02:19:20 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:19:20 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9038cf6c21dadbf088a664290d4b8fd0a6e1cdddbdeae55ed126fa74c7032e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:19:20 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9038cf6c21dadbf088a664290d4b8fd0a6e1cdddbdeae55ed126fa74c7032e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:19:20 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9038cf6c21dadbf088a664290d4b8fd0a6e1cdddbdeae55ed126fa74c7032e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:19:20 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9038cf6c21dadbf088a664290d4b8fd0a6e1cdddbdeae55ed126fa74c7032e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:19:20 np0005558317 podman[108646]: 2025-12-13 07:19:20.60415452 +0000 UTC m=+0.083369048 container init 7b5a14aa1de76fb1bfe76a92735f891d19e1474009d6b4aed1261429ca0c6180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 02:19:20 np0005558317 podman[108646]: 2025-12-13 07:19:20.610533335 +0000 UTC m=+0.089747843 container start 7b5a14aa1de76fb1bfe76a92735f891d19e1474009d6b4aed1261429ca0c6180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_banach, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 02:19:20 np0005558317 podman[108646]: 2025-12-13 07:19:20.611674864 +0000 UTC m=+0.090889373 container attach 7b5a14aa1de76fb1bfe76a92735f891d19e1474009d6b4aed1261429ca0c6180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_banach, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 02:19:20 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Dec 13 02:19:20 np0005558317 podman[108646]: 2025-12-13 07:19:20.536787812 +0000 UTC m=+0.016002340 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:19:20 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Dec 13 02:19:20 np0005558317 python3.9[108799]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:19:21 np0005558317 lvm[108941]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:19:21 np0005558317 lvm[108936]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:19:21 np0005558317 lvm[108941]: VG ceph_vg1 finished
Dec 13 02:19:21 np0005558317 lvm[108945]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:19:21 np0005558317 lvm[108945]: VG ceph_vg2 finished
Dec 13 02:19:21 np0005558317 lvm[108936]: VG ceph_vg0 finished
Dec 13 02:19:21 np0005558317 trusting_banach[108697]: {}
Dec 13 02:19:21 np0005558317 lvm[108950]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:19:21 np0005558317 lvm[108950]: VG ceph_vg0 finished
Dec 13 02:19:21 np0005558317 podman[108646]: 2025-12-13 07:19:21.234106375 +0000 UTC m=+0.713320883 container died 7b5a14aa1de76fb1bfe76a92735f891d19e1474009d6b4aed1261429ca0c6180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_banach, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:19:21 np0005558317 systemd[1]: libpod-7b5a14aa1de76fb1bfe76a92735f891d19e1474009d6b4aed1261429ca0c6180.scope: Deactivated successfully.
Dec 13 02:19:21 np0005558317 systemd[1]: var-lib-containers-storage-overlay-d9038cf6c21dadbf088a664290d4b8fd0a6e1cdddbdeae55ed126fa74c7032e0-merged.mount: Deactivated successfully.
Dec 13 02:19:21 np0005558317 podman[108646]: 2025-12-13 07:19:21.266055999 +0000 UTC m=+0.745270508 container remove 7b5a14aa1de76fb1bfe76a92735f891d19e1474009d6b4aed1261429ca0c6180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_banach, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:19:21 np0005558317 systemd[1]: libpod-conmon-7b5a14aa1de76fb1bfe76a92735f891d19e1474009d6b4aed1261429ca0c6180.scope: Deactivated successfully.
Dec 13 02:19:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:19:21 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:19:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:19:21 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:19:21 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Dec 13 02:19:21 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Dec 13 02:19:21 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:19:21 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:19:21 np0005558317 python3.9[109059]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:19:22 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v278: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:19:23 np0005558317 python3.9[109212]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:19:24 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v279: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:24 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Dec 13 02:19:24 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Dec 13 02:19:24 np0005558317 python3.9[109365]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:19:25 np0005558317 python3.9[109519]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Dec 13 02:19:25 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Dec 13 02:19:25 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Dec 13 02:19:25 np0005558317 systemd[1]: session-38.scope: Deactivated successfully.
Dec 13 02:19:25 np0005558317 systemd[1]: session-38.scope: Consumed 13.035s CPU time.
Dec 13 02:19:25 np0005558317 systemd-logind[745]: Session 38 logged out. Waiting for processes to exit.
Dec 13 02:19:25 np0005558317 systemd-logind[745]: Removed session 38.
Dec 13 02:19:26 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v280: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:27 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Dec 13 02:19:27 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Dec 13 02:19:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:19:28 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v281: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:30 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v282: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:30 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Dec 13 02:19:30 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Dec 13 02:19:31 np0005558317 systemd-logind[745]: New session 39 of user zuul.
Dec 13 02:19:31 np0005558317 systemd[1]: Started Session 39 of User zuul.
Dec 13 02:19:31 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Dec 13 02:19:31 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Dec 13 02:19:31 np0005558317 python3.9[109697]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:19:32 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v283: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:32 np0005558317 python3.9[109851]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:19:32 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Dec 13 02:19:32 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Dec 13 02:19:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:19:33 np0005558317 python3.9[110044]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:19:33 np0005558317 systemd[1]: session-39.scope: Deactivated successfully.
Dec 13 02:19:33 np0005558317 systemd[1]: session-39.scope: Consumed 1.638s CPU time.
Dec 13 02:19:33 np0005558317 systemd-logind[745]: Session 39 logged out. Waiting for processes to exit.
Dec 13 02:19:33 np0005558317 systemd-logind[745]: Removed session 39.
Dec 13 02:19:34 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v284: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:35 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Dec 13 02:19:35 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Dec 13 02:19:35 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.d scrub starts
Dec 13 02:19:35 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.d scrub ok
Dec 13 02:19:36 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v285: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:36 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Dec 13 02:19:36 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Dec 13 02:19:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:19:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:19:38
Dec 13 02:19:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:19:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 02:19:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'default.rgw.control', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'vms', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log']
Dec 13 02:19:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:19:38 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v286: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:38 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Dec 13 02:19:38 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Dec 13 02:19:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:19:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:19:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:19:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:19:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:19:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:19:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:19:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:19:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:19:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:19:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:19:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:19:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:19:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:19:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:19:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:19:39 np0005558317 systemd-logind[745]: New session 40 of user zuul.
Dec 13 02:19:39 np0005558317 systemd[1]: Started Session 40 of User zuul.
Dec 13 02:19:39 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Dec 13 02:19:39 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Dec 13 02:19:40 np0005558317 python3.9[110223]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:19:40 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v287: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:40 np0005558317 python3.9[110377]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:19:41 np0005558317 python3.9[110533]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:19:42 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v288: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:42 np0005558317 python3.9[110617]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:19:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:19:43 np0005558317 python3.9[110770]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:19:44 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v289: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:44 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Dec 13 02:19:44 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Dec 13 02:19:44 np0005558317 python3.9[110965]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:19:45 np0005558317 python3.9[111117]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:19:46 np0005558317 python3.9[111279]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:19:46 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v290: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:46 np0005558317 python3.9[111357]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:19:46 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.f scrub starts
Dec 13 02:19:46 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.f scrub ok
Dec 13 02:19:46 np0005558317 python3.9[111509]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:19:47 np0005558317 python3.9[111587]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:19:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:19:47 np0005558317 python3.9[111739]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:19:48 np0005558317 python3.9[111891]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:19:48 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v291: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:19:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:19:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:19:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:19:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:19:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:19:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:19:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:19:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:19:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:19:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:19:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:19:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 02:19:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:19:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:19:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:19:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:19:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:19:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:19:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:19:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:19:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:19:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:19:48 np0005558317 python3.9[112043]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:19:49 np0005558317 python3.9[112195]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:19:49 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Dec 13 02:19:49 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Dec 13 02:19:49 np0005558317 python3.9[112347]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:19:50 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v292: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:50 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.b scrub starts
Dec 13 02:19:50 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.b scrub ok
Dec 13 02:19:50 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Dec 13 02:19:50 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Dec 13 02:19:51 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Dec 13 02:19:51 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Dec 13 02:19:51 np0005558317 python3.9[112500]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:19:51 np0005558317 python3.9[112654]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:19:52 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v293: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:52 np0005558317 python3.9[112806]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:19:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:19:53 np0005558317 python3.9[112958]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:19:53 np0005558317 python3.9[113111]: ansible-service_facts Invoked
Dec 13 02:19:53 np0005558317 network[113128]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 02:19:53 np0005558317 network[113129]: 'network-scripts' will be removed from distribution in near future.
Dec 13 02:19:53 np0005558317 network[113130]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 02:19:54 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v294: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:55 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Dec 13 02:19:55 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Dec 13 02:19:56 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v295: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:19:56.530942) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610396531019, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7257, "num_deletes": 251, "total_data_size": 9856231, "memory_usage": 10017920, "flush_reason": "Manual Compaction"}
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610396544647, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7723937, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 126, "largest_seqno": 7380, "table_properties": {"data_size": 7697472, "index_size": 17070, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 77106, "raw_average_key_size": 23, "raw_value_size": 7634415, "raw_average_value_size": 2308, "num_data_blocks": 750, "num_entries": 3307, "num_filter_entries": 3307, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610002, "oldest_key_time": 1765610002, "file_creation_time": 1765610396, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 13733 microseconds, and 11146 cpu microseconds.
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:19:56.544679) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7723937 bytes OK
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:19:56.544694) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:19:56.545784) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:19:56.545817) EVENT_LOG_v1 {"time_micros": 1765610396545810, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:19:56.545846) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9824501, prev total WAL file size 9824501, number of live WAL files 2.
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:19:56.547886) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7542KB) 13(47KB) 8(1944B)]
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610396547954, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7774062, "oldest_snapshot_seqno": -1}
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3118 keys, 7732963 bytes, temperature: kUnknown
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610396563824, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7732963, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7707034, "index_size": 17051, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7813, "raw_key_size": 75132, "raw_average_key_size": 24, "raw_value_size": 7645568, "raw_average_value_size": 2452, "num_data_blocks": 751, "num_entries": 3118, "num_filter_entries": 3118, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610001, "oldest_key_time": 0, "file_creation_time": 1765610396, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:19:56.563977) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7732963 bytes
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:19:56.564278) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 488.3 rd, 485.7 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.4, 0.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3406, records dropped: 288 output_compression: NoCompression
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:19:56.564292) EVENT_LOG_v1 {"time_micros": 1765610396564285, "job": 4, "event": "compaction_finished", "compaction_time_micros": 15920, "compaction_time_cpu_micros": 12875, "output_level": 6, "num_output_files": 1, "total_output_size": 7732963, "num_input_records": 3406, "num_output_records": 3118, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610396565385, "job": 4, "event": "table_file_deletion", "file_number": 19}
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610396565451, "job": 4, "event": "table_file_deletion", "file_number": 13}
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610396565482, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec 13 02:19:56 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:19:56.547823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:19:57 np0005558317 python3.9[113583]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:19:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:19:58 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Dec 13 02:19:58 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Dec 13 02:19:58 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v296: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:19:58 np0005558317 python3.9[113736]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 13 02:19:59 np0005558317 python3.9[113888]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:20:00 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v297: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:00 np0005558317 python3.9[113966]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:00 np0005558317 python3.9[114118]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:20:01 np0005558317 python3.9[114196]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:02 np0005558317 python3.9[114348]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:02 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Dec 13 02:20:02 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Dec 13 02:20:02 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v298: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:20:02 np0005558317 python3.9[114500]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:20:03 np0005558317 python3.9[114584]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:20:04 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v299: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:04 np0005558317 systemd[1]: session-40.scope: Deactivated successfully.
Dec 13 02:20:04 np0005558317 systemd[1]: session-40.scope: Consumed 16.811s CPU time.
Dec 13 02:20:04 np0005558317 systemd-logind[745]: Session 40 logged out. Waiting for processes to exit.
Dec 13 02:20:04 np0005558317 systemd-logind[745]: Removed session 40.
Dec 13 02:20:06 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Dec 13 02:20:06 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Dec 13 02:20:06 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v300: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:20:08 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Dec 13 02:20:08 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Dec 13 02:20:08 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v301: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:20:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:20:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:20:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:20:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:20:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:20:09 np0005558317 systemd-logind[745]: New session 41 of user zuul.
Dec 13 02:20:09 np0005558317 systemd[1]: Started Session 41 of User zuul.
Dec 13 02:20:10 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v302: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:10 np0005558317 python3.9[114767]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:11 np0005558317 python3.9[114919]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:20:11 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.a scrub starts
Dec 13 02:20:11 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.a scrub ok
Dec 13 02:20:11 np0005558317 python3.9[114997]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:11 np0005558317 systemd-logind[745]: Session 41 logged out. Waiting for processes to exit.
Dec 13 02:20:11 np0005558317 systemd[1]: session-41.scope: Deactivated successfully.
Dec 13 02:20:11 np0005558317 systemd[1]: session-41.scope: Consumed 1.239s CPU time.
Dec 13 02:20:11 np0005558317 systemd-logind[745]: Removed session 41.
Dec 13 02:20:12 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v303: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:12 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Dec 13 02:20:12 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Dec 13 02:20:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:20:13 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Dec 13 02:20:13 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Dec 13 02:20:14 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v304: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:14 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.b scrub starts
Dec 13 02:20:14 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.b scrub ok
Dec 13 02:20:16 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v305: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:16 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Dec 13 02:20:16 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Dec 13 02:20:16 np0005558317 systemd-logind[745]: New session 42 of user zuul.
Dec 13 02:20:16 np0005558317 systemd[1]: Started Session 42 of User zuul.
Dec 13 02:20:17 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Dec 13 02:20:17 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Dec 13 02:20:17 np0005558317 python3.9[115175]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:20:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:20:18 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Dec 13 02:20:18 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v306: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:18 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Dec 13 02:20:18 np0005558317 python3.9[115331]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:18 np0005558317 python3.9[115506]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:20:19 np0005558317 python3.9[115584]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.nsewnowd recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:19 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Dec 13 02:20:19 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Dec 13 02:20:19 np0005558317 python3.9[115736]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:20:20 np0005558317 python3.9[115814]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.z69g7aew recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:20 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v307: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:20 np0005558317 python3.9[115966]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:20:21 np0005558317 python3.9[116118]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:20:21 np0005558317 python3.9[116213]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:20:21 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Dec 13 02:20:21 np0005558317 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Dec 13 02:20:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:20:21 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:20:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:20:21 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:20:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:20:21 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:20:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:20:21 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:20:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:20:21 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:20:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:20:21 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:20:21 np0005558317 python3.9[116427]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:20:22 np0005558317 podman[116541]: 2025-12-13 07:20:22.164212834 +0000 UTC m=+0.032276287 container create 0d53a71348fd9eb37859951110804d823c191b35931eb5fc94b5fe93b1d627fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 02:20:22 np0005558317 systemd[1]: Started libpod-conmon-0d53a71348fd9eb37859951110804d823c191b35931eb5fc94b5fe93b1d627fd.scope.
Dec 13 02:20:22 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:20:22 np0005558317 podman[116541]: 2025-12-13 07:20:22.220605291 +0000 UTC m=+0.088668764 container init 0d53a71348fd9eb37859951110804d823c191b35931eb5fc94b5fe93b1d627fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 02:20:22 np0005558317 podman[116541]: 2025-12-13 07:20:22.226085188 +0000 UTC m=+0.094148641 container start 0d53a71348fd9eb37859951110804d823c191b35931eb5fc94b5fe93b1d627fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 02:20:22 np0005558317 podman[116541]: 2025-12-13 07:20:22.227470802 +0000 UTC m=+0.095534255 container attach 0d53a71348fd9eb37859951110804d823c191b35931eb5fc94b5fe93b1d627fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 02:20:22 np0005558317 stupefied_wilson[116579]: 167 167
Dec 13 02:20:22 np0005558317 systemd[1]: libpod-0d53a71348fd9eb37859951110804d823c191b35931eb5fc94b5fe93b1d627fd.scope: Deactivated successfully.
Dec 13 02:20:22 np0005558317 conmon[116579]: conmon 0d53a71348fd9eb37859 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d53a71348fd9eb37859951110804d823c191b35931eb5fc94b5fe93b1d627fd.scope/container/memory.events
Dec 13 02:20:22 np0005558317 podman[116541]: 2025-12-13 07:20:22.231231706 +0000 UTC m=+0.099295159 container died 0d53a71348fd9eb37859951110804d823c191b35931eb5fc94b5fe93b1d627fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:20:22 np0005558317 systemd[1]: var-lib-containers-storage-overlay-4406dc2f3f8b1530d13c68d0597c3c83d4feb7a6b3e770a0b1b7e7d6862b7e18-merged.mount: Deactivated successfully.
Dec 13 02:20:22 np0005558317 podman[116541]: 2025-12-13 07:20:22.15218784 +0000 UTC m=+0.020251313 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:20:22 np0005558317 podman[116541]: 2025-12-13 07:20:22.252154304 +0000 UTC m=+0.120217757 container remove 0d53a71348fd9eb37859951110804d823c191b35931eb5fc94b5fe93b1d627fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilson, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 02:20:22 np0005558317 systemd[1]: libpod-conmon-0d53a71348fd9eb37859951110804d823c191b35931eb5fc94b5fe93b1d627fd.scope: Deactivated successfully.
Dec 13 02:20:22 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v308: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:22 np0005558317 python3.9[116575]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:20:22 np0005558317 podman[116601]: 2025-12-13 07:20:22.36550129 +0000 UTC m=+0.027614212 container create e95f2efaa55ef3c4e96c20a022eff263c555bb241559f16c872d7f62d24197e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goodall, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:20:22 np0005558317 systemd[1]: Started libpod-conmon-e95f2efaa55ef3c4e96c20a022eff263c555bb241559f16c872d7f62d24197e9.scope.
Dec 13 02:20:22 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:20:22 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74ce3842cae9bb91cc4f6370819e61bba4565a677c3f12798e62185343b55d1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:20:22 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74ce3842cae9bb91cc4f6370819e61bba4565a677c3f12798e62185343b55d1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:20:22 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74ce3842cae9bb91cc4f6370819e61bba4565a677c3f12798e62185343b55d1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:20:22 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74ce3842cae9bb91cc4f6370819e61bba4565a677c3f12798e62185343b55d1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:20:22 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74ce3842cae9bb91cc4f6370819e61bba4565a677c3f12798e62185343b55d1c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:20:22 np0005558317 podman[116601]: 2025-12-13 07:20:22.42477415 +0000 UTC m=+0.086887093 container init e95f2efaa55ef3c4e96c20a022eff263c555bb241559f16c872d7f62d24197e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goodall, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:20:22 np0005558317 podman[116601]: 2025-12-13 07:20:22.43139573 +0000 UTC m=+0.093508654 container start e95f2efaa55ef3c4e96c20a022eff263c555bb241559f16c872d7f62d24197e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goodall, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 02:20:22 np0005558317 podman[116601]: 2025-12-13 07:20:22.432673651 +0000 UTC m=+0.094786594 container attach e95f2efaa55ef3c4e96c20a022eff263c555bb241559f16c872d7f62d24197e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 02:20:22 np0005558317 podman[116601]: 2025-12-13 07:20:22.354371394 +0000 UTC m=+0.016484337 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:20:22 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:20:22 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:20:22 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:20:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:20:22 np0005558317 vigorous_goodall[116622]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:20:22 np0005558317 vigorous_goodall[116622]: --> All data devices are unavailable
Dec 13 02:20:22 np0005558317 systemd[1]: libpod-e95f2efaa55ef3c4e96c20a022eff263c555bb241559f16c872d7f62d24197e9.scope: Deactivated successfully.
Dec 13 02:20:22 np0005558317 podman[116601]: 2025-12-13 07:20:22.78288355 +0000 UTC m=+0.444996483 container died e95f2efaa55ef3c4e96c20a022eff263c555bb241559f16c872d7f62d24197e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goodall, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 02:20:22 np0005558317 systemd[1]: var-lib-containers-storage-overlay-74ce3842cae9bb91cc4f6370819e61bba4565a677c3f12798e62185343b55d1c-merged.mount: Deactivated successfully.
Dec 13 02:20:22 np0005558317 podman[116601]: 2025-12-13 07:20:22.808860452 +0000 UTC m=+0.470973375 container remove e95f2efaa55ef3c4e96c20a022eff263c555bb241559f16c872d7f62d24197e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goodall, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 02:20:22 np0005558317 python3.9[116778]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:22 np0005558317 systemd[1]: libpod-conmon-e95f2efaa55ef3c4e96c20a022eff263c555bb241559f16c872d7f62d24197e9.scope: Deactivated successfully.
Dec 13 02:20:23 np0005558317 podman[116980]: 2025-12-13 07:20:23.146482804 +0000 UTC m=+0.029413886 container create e3fe181cd7228b4b333b516f750ca1b7d75c99be2dd7367bf49c6cbe7bc6d8d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:20:23 np0005558317 systemd[1]: Started libpod-conmon-e3fe181cd7228b4b333b516f750ca1b7d75c99be2dd7367bf49c6cbe7bc6d8d2.scope.
Dec 13 02:20:23 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:20:23 np0005558317 podman[116980]: 2025-12-13 07:20:23.200989655 +0000 UTC m=+0.083920757 container init e3fe181cd7228b4b333b516f750ca1b7d75c99be2dd7367bf49c6cbe7bc6d8d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bhaskara, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 02:20:23 np0005558317 podman[116980]: 2025-12-13 07:20:23.205601004 +0000 UTC m=+0.088532096 container start e3fe181cd7228b4b333b516f750ca1b7d75c99be2dd7367bf49c6cbe7bc6d8d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bhaskara, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 02:20:23 np0005558317 podman[116980]: 2025-12-13 07:20:23.207032683 +0000 UTC m=+0.089963775 container attach e3fe181cd7228b4b333b516f750ca1b7d75c99be2dd7367bf49c6cbe7bc6d8d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bhaskara, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:20:23 np0005558317 laughing_bhaskara[117022]: 167 167
Dec 13 02:20:23 np0005558317 systemd[1]: libpod-e3fe181cd7228b4b333b516f750ca1b7d75c99be2dd7367bf49c6cbe7bc6d8d2.scope: Deactivated successfully.
Dec 13 02:20:23 np0005558317 podman[116980]: 2025-12-13 07:20:23.210233582 +0000 UTC m=+0.093164664 container died e3fe181cd7228b4b333b516f750ca1b7d75c99be2dd7367bf49c6cbe7bc6d8d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bhaskara, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:20:23 np0005558317 systemd[1]: var-lib-containers-storage-overlay-b74922e257878a8ed47770f394699bff145159dca6ff947d1dc20422ddf44b79-merged.mount: Deactivated successfully.
Dec 13 02:20:23 np0005558317 podman[116980]: 2025-12-13 07:20:23.230236425 +0000 UTC m=+0.113167507 container remove e3fe181cd7228b4b333b516f750ca1b7d75c99be2dd7367bf49c6cbe7bc6d8d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 02:20:23 np0005558317 podman[116980]: 2025-12-13 07:20:23.135052954 +0000 UTC m=+0.017984046 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:20:23 np0005558317 systemd[1]: libpod-conmon-e3fe181cd7228b4b333b516f750ca1b7d75c99be2dd7367bf49c6cbe7bc6d8d2.scope: Deactivated successfully.
Dec 13 02:20:23 np0005558317 python3.9[117024]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:20:23 np0005558317 podman[117044]: 2025-12-13 07:20:23.344694737 +0000 UTC m=+0.028005900 container create 2c98c968aa8d24a898164fc00c447eea8b5a4db74751ae83bcb3745f9fd4fd52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_albattani, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:20:23 np0005558317 systemd[1]: Started libpod-conmon-2c98c968aa8d24a898164fc00c447eea8b5a4db74751ae83bcb3745f9fd4fd52.scope.
Dec 13 02:20:23 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:20:23 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c90511c1fb0c9f8e4a075c8c8af697bed1aff4070ad1da070477e306e104b0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:20:23 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c90511c1fb0c9f8e4a075c8c8af697bed1aff4070ad1da070477e306e104b0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:20:23 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c90511c1fb0c9f8e4a075c8c8af697bed1aff4070ad1da070477e306e104b0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:20:23 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c90511c1fb0c9f8e4a075c8c8af697bed1aff4070ad1da070477e306e104b0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:20:23 np0005558317 podman[117044]: 2025-12-13 07:20:23.406636602 +0000 UTC m=+0.089947786 container init 2c98c968aa8d24a898164fc00c447eea8b5a4db74751ae83bcb3745f9fd4fd52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_albattani, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Dec 13 02:20:23 np0005558317 podman[117044]: 2025-12-13 07:20:23.413091439 +0000 UTC m=+0.096402603 container start 2c98c968aa8d24a898164fc00c447eea8b5a4db74751ae83bcb3745f9fd4fd52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 02:20:23 np0005558317 podman[117044]: 2025-12-13 07:20:23.414899839 +0000 UTC m=+0.098211003 container attach 2c98c968aa8d24a898164fc00c447eea8b5a4db74751ae83bcb3745f9fd4fd52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_albattani, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:20:23 np0005558317 podman[117044]: 2025-12-13 07:20:23.333426 +0000 UTC m=+0.016737184 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:20:23 np0005558317 funny_albattani[117059]: {
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:    "0": [
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:        {
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "devices": [
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "/dev/loop3"
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            ],
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "lv_name": "ceph_lv0",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "lv_size": "21470642176",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "name": "ceph_lv0",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "tags": {
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.cluster_name": "ceph",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.crush_device_class": "",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.encrypted": "0",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.objectstore": "bluestore",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.osd_id": "0",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.type": "block",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.vdo": "0",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.with_tpm": "0"
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            },
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "type": "block",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "vg_name": "ceph_vg0"
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:        }
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:    ],
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:    "1": [
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:        {
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "devices": [
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "/dev/loop4"
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            ],
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "lv_name": "ceph_lv1",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "lv_size": "21470642176",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "name": "ceph_lv1",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "tags": {
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.cluster_name": "ceph",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.crush_device_class": "",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.encrypted": "0",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.objectstore": "bluestore",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.osd_id": "1",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.type": "block",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.vdo": "0",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.with_tpm": "0"
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            },
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "type": "block",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "vg_name": "ceph_vg1"
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:        }
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:    ],
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:    "2": [
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:        {
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "devices": [
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "/dev/loop5"
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            ],
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "lv_name": "ceph_lv2",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "lv_size": "21470642176",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "name": "ceph_lv2",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "tags": {
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.cluster_name": "ceph",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.crush_device_class": "",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.encrypted": "0",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.objectstore": "bluestore",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.osd_id": "2",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.type": "block",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.vdo": "0",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:                "ceph.with_tpm": "0"
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            },
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "type": "block",
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:            "vg_name": "ceph_vg2"
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:        }
Dec 13 02:20:23 np0005558317 funny_albattani[117059]:    ]
Dec 13 02:20:23 np0005558317 funny_albattani[117059]: }
Dec 13 02:20:23 np0005558317 systemd[1]: libpod-2c98c968aa8d24a898164fc00c447eea8b5a4db74751ae83bcb3745f9fd4fd52.scope: Deactivated successfully.
Dec 13 02:20:23 np0005558317 podman[117044]: 2025-12-13 07:20:23.651666321 +0000 UTC m=+0.334977484 container died 2c98c968aa8d24a898164fc00c447eea8b5a4db74751ae83bcb3745f9fd4fd52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 02:20:23 np0005558317 systemd[1]: var-lib-containers-storage-overlay-5c90511c1fb0c9f8e4a075c8c8af697bed1aff4070ad1da070477e306e104b0f-merged.mount: Deactivated successfully.
Dec 13 02:20:23 np0005558317 podman[117044]: 2025-12-13 07:20:23.674339211 +0000 UTC m=+0.357650374 container remove 2c98c968aa8d24a898164fc00c447eea8b5a4db74751ae83bcb3745f9fd4fd52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_albattani, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 02:20:23 np0005558317 systemd[1]: libpod-conmon-2c98c968aa8d24a898164fc00c447eea8b5a4db74751ae83bcb3745f9fd4fd52.scope: Deactivated successfully.
Dec 13 02:20:23 np0005558317 python3.9[117139]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:24 np0005558317 podman[117347]: 2025-12-13 07:20:24.034517261 +0000 UTC m=+0.032380604 container create 345616dd7924107a0dc40779278d8e649fe980c1ac6eba21d1ce32afa4d3cd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mclean, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:20:24 np0005558317 systemd[1]: Started libpod-conmon-345616dd7924107a0dc40779278d8e649fe980c1ac6eba21d1ce32afa4d3cd1d.scope.
Dec 13 02:20:24 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:20:24 np0005558317 podman[117347]: 2025-12-13 07:20:24.087799892 +0000 UTC m=+0.085663245 container init 345616dd7924107a0dc40779278d8e649fe980c1ac6eba21d1ce32afa4d3cd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mclean, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Dec 13 02:20:24 np0005558317 podman[117347]: 2025-12-13 07:20:24.092507853 +0000 UTC m=+0.090371195 container start 345616dd7924107a0dc40779278d8e649fe980c1ac6eba21d1ce32afa4d3cd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mclean, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:20:24 np0005558317 podman[117347]: 2025-12-13 07:20:24.093636383 +0000 UTC m=+0.091499725 container attach 345616dd7924107a0dc40779278d8e649fe980c1ac6eba21d1ce32afa4d3cd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:20:24 np0005558317 goofy_mclean[117380]: 167 167
Dec 13 02:20:24 np0005558317 systemd[1]: libpod-345616dd7924107a0dc40779278d8e649fe980c1ac6eba21d1ce32afa4d3cd1d.scope: Deactivated successfully.
Dec 13 02:20:24 np0005558317 podman[117347]: 2025-12-13 07:20:24.096495546 +0000 UTC m=+0.094358898 container died 345616dd7924107a0dc40779278d8e649fe980c1ac6eba21d1ce32afa4d3cd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mclean, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:20:24 np0005558317 systemd[1]: var-lib-containers-storage-overlay-ce43c3aa87fc05cef6a04f151f9cd871aeb4a02455d11670c2a28e25ded39aea-merged.mount: Deactivated successfully.
Dec 13 02:20:24 np0005558317 podman[117347]: 2025-12-13 07:20:24.11469652 +0000 UTC m=+0.112559862 container remove 345616dd7924107a0dc40779278d8e649fe980c1ac6eba21d1ce32afa4d3cd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mclean, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:20:24 np0005558317 podman[117347]: 2025-12-13 07:20:24.021589084 +0000 UTC m=+0.019452446 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:20:24 np0005558317 systemd[1]: libpod-conmon-345616dd7924107a0dc40779278d8e649fe980c1ac6eba21d1ce32afa4d3cd1d.scope: Deactivated successfully.
Dec 13 02:20:24 np0005558317 python3.9[117374]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:20:24 np0005558317 podman[117404]: 2025-12-13 07:20:24.228199199 +0000 UTC m=+0.028717884 container create 64708052e515cd9c88f6d2009d264988e74a601058515385574400142a25a9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 02:20:24 np0005558317 systemd[1]: Started libpod-conmon-64708052e515cd9c88f6d2009d264988e74a601058515385574400142a25a9f3.scope.
Dec 13 02:20:24 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:20:24 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b6b528489ee78af0f63c35341a9a420c7411a5875eb22ce8b7235f4025ab9d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:20:24 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b6b528489ee78af0f63c35341a9a420c7411a5875eb22ce8b7235f4025ab9d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:20:24 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b6b528489ee78af0f63c35341a9a420c7411a5875eb22ce8b7235f4025ab9d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:20:24 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b6b528489ee78af0f63c35341a9a420c7411a5875eb22ce8b7235f4025ab9d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:20:24 np0005558317 podman[117404]: 2025-12-13 07:20:24.274379602 +0000 UTC m=+0.074898307 container init 64708052e515cd9c88f6d2009d264988e74a601058515385574400142a25a9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lalande, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:20:24 np0005558317 podman[117404]: 2025-12-13 07:20:24.280509646 +0000 UTC m=+0.081028331 container start 64708052e515cd9c88f6d2009d264988e74a601058515385574400142a25a9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lalande, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:20:24 np0005558317 podman[117404]: 2025-12-13 07:20:24.283774325 +0000 UTC m=+0.084293020 container attach 64708052e515cd9c88f6d2009d264988e74a601058515385574400142a25a9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lalande, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 02:20:24 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v309: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:24 np0005558317 podman[117404]: 2025-12-13 07:20:24.216487375 +0000 UTC m=+0.017006081 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:20:24 np0005558317 python3.9[117498]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:24 np0005558317 lvm[117645]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:20:24 np0005558317 lvm[117645]: VG ceph_vg0 finished
Dec 13 02:20:24 np0005558317 lvm[117648]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:20:24 np0005558317 lvm[117648]: VG ceph_vg1 finished
Dec 13 02:20:24 np0005558317 lvm[117651]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:20:24 np0005558317 lvm[117651]: VG ceph_vg2 finished
Dec 13 02:20:24 np0005558317 quirky_lalande[117438]: {}
Dec 13 02:20:24 np0005558317 systemd[1]: libpod-64708052e515cd9c88f6d2009d264988e74a601058515385574400142a25a9f3.scope: Deactivated successfully.
Dec 13 02:20:24 np0005558317 podman[117404]: 2025-12-13 07:20:24.914725886 +0000 UTC m=+0.715244571 container died 64708052e515cd9c88f6d2009d264988e74a601058515385574400142a25a9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lalande, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 02:20:24 np0005558317 systemd[1]: var-lib-containers-storage-overlay-0b6b528489ee78af0f63c35341a9a420c7411a5875eb22ce8b7235f4025ab9d0-merged.mount: Deactivated successfully.
Dec 13 02:20:24 np0005558317 podman[117404]: 2025-12-13 07:20:24.938732722 +0000 UTC m=+0.739251407 container remove 64708052e515cd9c88f6d2009d264988e74a601058515385574400142a25a9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 02:20:24 np0005558317 systemd[1]: libpod-conmon-64708052e515cd9c88f6d2009d264988e74a601058515385574400142a25a9f3.scope: Deactivated successfully.
Dec 13 02:20:24 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:20:24 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:20:24 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:20:24 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:20:25 np0005558317 python3.9[117763]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:20:25 np0005558317 systemd[1]: Reloading.
Dec 13 02:20:25 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:20:25 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:20:25 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:20:25 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:20:26 np0005558317 python3.9[117954]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:20:26 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v310: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:26 np0005558317 python3.9[118032]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:26 np0005558317 python3.9[118184]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:20:27 np0005558317 python3.9[118262]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:20:27 np0005558317 python3.9[118414]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:20:27 np0005558317 systemd[1]: Reloading.
Dec 13 02:20:27 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:20:27 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:20:28 np0005558317 systemd[1]: Starting Create netns directory...
Dec 13 02:20:28 np0005558317 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 13 02:20:28 np0005558317 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 13 02:20:28 np0005558317 systemd[1]: Finished Create netns directory.
Dec 13 02:20:28 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v311: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:28 np0005558317 python3.9[118605]: ansible-ansible.builtin.service_facts Invoked
Dec 13 02:20:28 np0005558317 network[118622]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 02:20:28 np0005558317 network[118623]: 'network-scripts' will be removed from distribution in near future.
Dec 13 02:20:28 np0005558317 network[118624]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 02:20:30 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v312: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:31 np0005558317 python3.9[118886]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:20:31 np0005558317 python3.9[118964]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:32 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v313: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:32 np0005558317 python3.9[119116]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:20:33 np0005558317 python3.9[119268]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:20:33 np0005558317 python3.9[119346]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:34 np0005558317 python3.9[119498]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 13 02:20:34 np0005558317 systemd[1]: Starting Time & Date Service...
Dec 13 02:20:34 np0005558317 systemd[1]: Started Time & Date Service.
Dec 13 02:20:34 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v314: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:34 np0005558317 python3.9[119654]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:35 np0005558317 python3.9[119806]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:20:35 np0005558317 python3.9[119884]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:36 np0005558317 python3.9[120036]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:20:36 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v315: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:36 np0005558317 python3.9[120114]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=._xxem502 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:36 np0005558317 python3.9[120266]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:20:37 np0005558317 python3.9[120344]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:20:37 np0005558317 python3.9[120496]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:20:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:20:38
Dec 13 02:20:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:20:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 02:20:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] pools ['volumes', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'vms', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', '.mgr']
Dec 13 02:20:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:20:38 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v316: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:38 np0005558317 python3[120649]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 13 02:20:39 np0005558317 python3.9[120801]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:20:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:20:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:20:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:20:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:20:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:20:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:20:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:20:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:20:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:20:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:20:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:20:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:20:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:20:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:20:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:20:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:20:39 np0005558317 python3.9[120879]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:39 np0005558317 python3.9[121031]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:20:40 np0005558317 python3.9[121109]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:40 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v317: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:40 np0005558317 python3.9[121261]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:20:41 np0005558317 python3.9[121339]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:41 np0005558317 python3.9[121491]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:20:41 np0005558317 python3.9[121569]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:42 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v318: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:42 np0005558317 python3.9[121721]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:20:42 np0005558317 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 02:20:42 np0005558317 systemd[1]: session-17.scope: Consumed 1min 11.719s CPU time.
Dec 13 02:20:42 np0005558317 systemd-logind[745]: Session 17 logged out. Waiting for processes to exit.
Dec 13 02:20:42 np0005558317 systemd-logind[745]: Removed session 17.
Dec 13 02:20:42 np0005558317 python3.9[121799]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:20:43 np0005558317 python3.9[121951]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:20:43 np0005558317 python3.9[122106]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:44 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v319: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:44 np0005558317 python3.9[122258]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:44 np0005558317 python3.9[122410]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:45 np0005558317 python3.9[122562]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 13 02:20:46 np0005558317 python3.9[122714]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 13 02:20:46 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v320: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:46 np0005558317 systemd-logind[745]: Session 42 logged out. Waiting for processes to exit.
Dec 13 02:20:46 np0005558317 systemd[1]: session-42.scope: Deactivated successfully.
Dec 13 02:20:46 np0005558317 systemd[1]: session-42.scope: Consumed 21.283s CPU time.
Dec 13 02:20:46 np0005558317 systemd-logind[745]: Removed session 42.
Dec 13 02:20:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:20:48 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v321: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:20:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:20:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:20:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:20:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:20:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:20:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:20:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:20:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:20:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:20:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:20:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:20:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 02:20:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:20:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:20:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:20:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:20:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:20:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:20:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:20:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:20:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:20:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:20:50 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v322: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:52 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v323: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:52 np0005558317 systemd-logind[745]: New session 43 of user zuul.
Dec 13 02:20:52 np0005558317 systemd[1]: Started Session 43 of User zuul.
Dec 13 02:20:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:20:52 np0005558317 python3.9[122894]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 13 02:20:53 np0005558317 python3.9[123046]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:20:54 np0005558317 python3.9[123200]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec 13 02:20:54 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v324: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:54 np0005558317 python3.9[123352]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.6xzeyhf5 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:20:55 np0005558317 python3.9[123477]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.6xzeyhf5 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610454.288385-44-123153932086596/.source.6xzeyhf5 _original_basename=.pm6pzats follow=False checksum=c1ddfbd914d066c629baab8f2d3e4b9f69b0d895 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:56 np0005558317 python3.9[123629]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:20:56 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v325: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:20:56 np0005558317 python3.9[123781]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCekpfjOZMQHu4kGkMmbnPcCtz1ykBu18rwwghFZ6JdZNeLGT0geVZzeGTxx67o32Xucl5rndeaEtZvZfxTXM1W/3Z9ig0x1tTtqK2lTLjxcw4+AxChtq8Mt1LZKUi2MHVUdDkB8UwKvPPC6k5NFQRBu1jsX63zDiUCudXQlFm49OLA8BZh7VuZYlpOMnuiPC9cWsSAehEH4hmIdqlyl7xhfBn/4IId10yPH4Bev4qk4z212G730uw0ldn9RfPP2Batr31zKwOCUveVL5V48yK6VIj2O4uztbh6yagWlbqPwmUoYdvokyMVmONCStsc8BDSSaTmH7gv6cm1tfpfpKJlBo25kpuVocNQaaZB8/x71weojzujWfYBPfwbGARRkq9lgjdmyLJot9XdtcDkAKNeE6nzDo29nj1SpYzDYu2OrwI8RN9TLEQyXyUi80L4ELrI2WrVf5NwIvfG0ZKHurHxEDYcJKris+z3lCdPHRbw/D0HAhFZ6YnnViCeqLe+XL0=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDlhQSLisbnaeA/5eqQ07vXPLvOWH+wLodInwcPHjCbq#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL+1SrJ/t+tkNcFtDd1R0f0/5owYzeRM7hR2TrpSEQtZk5y2BWR+htC7NOo7cYghMztLnyJaOIsNSp9NjO5UEBE=#012 create=True mode=0644 path=/tmp/ansible.6xzeyhf5 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:57 np0005558317 python3.9[123933]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.6xzeyhf5' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:20:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:20:57 np0005558317 python3.9[124087]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.6xzeyhf5 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:58 np0005558317 systemd-logind[745]: Session 43 logged out. Waiting for processes to exit.
Dec 13 02:20:58 np0005558317 systemd[1]: session-43.scope: Deactivated successfully.
Dec 13 02:20:58 np0005558317 systemd[1]: session-43.scope: Consumed 3.537s CPU time.
Dec 13 02:20:58 np0005558317 systemd-logind[745]: Removed session 43.
Dec 13 02:20:58 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v326: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:00 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v327: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:02 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v328: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:21:03 np0005558317 systemd-logind[745]: New session 44 of user zuul.
Dec 13 02:21:03 np0005558317 systemd[1]: Started Session 44 of User zuul.
Dec 13 02:21:04 np0005558317 python3.9[124265]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:21:04 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v329: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:04 np0005558317 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 13 02:21:05 np0005558317 python3.9[124423]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 13 02:21:05 np0005558317 python3.9[124577]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:21:06 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v330: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:06 np0005558317 python3.9[124730]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:21:06 np0005558317 python3.9[124883]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:21:07 np0005558317 python3.9[125035]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:21:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:21:08 np0005558317 systemd[1]: session-44.scope: Deactivated successfully.
Dec 13 02:21:08 np0005558317 systemd[1]: session-44.scope: Consumed 2.828s CPU time.
Dec 13 02:21:08 np0005558317 systemd-logind[745]: Session 44 logged out. Waiting for processes to exit.
Dec 13 02:21:08 np0005558317 systemd-logind[745]: Removed session 44.
Dec 13 02:21:08 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v331: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:21:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:21:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:21:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:21:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:21:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:21:10 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v332: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:12 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v333: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:21:13 np0005558317 systemd-logind[745]: New session 45 of user zuul.
Dec 13 02:21:13 np0005558317 systemd[1]: Started Session 45 of User zuul.
Dec 13 02:21:14 np0005558317 python3.9[125213]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:21:14 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v334: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:14 np0005558317 python3.9[125369]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:21:15 np0005558317 python3.9[125453]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 13 02:21:16 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v335: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:17 np0005558317 python3.9[125604]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:21:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:21:18 np0005558317 python3.9[125755]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 13 02:21:18 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v336: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:18 np0005558317 python3.9[125905]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:21:19 np0005558317 python3.9[126055]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:21:19 np0005558317 systemd[1]: session-45.scope: Deactivated successfully.
Dec 13 02:21:19 np0005558317 systemd[1]: session-45.scope: Consumed 4.216s CPU time.
Dec 13 02:21:19 np0005558317 systemd-logind[745]: Session 45 logged out. Waiting for processes to exit.
Dec 13 02:21:19 np0005558317 systemd-logind[745]: Removed session 45.
Dec 13 02:21:20 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v337: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:22 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v338: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:21:24 np0005558317 systemd-logind[745]: New session 46 of user zuul.
Dec 13 02:21:24 np0005558317 systemd[1]: Started Session 46 of User zuul.
Dec 13 02:21:24 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v339: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:25 np0005558317 python3.9[126233]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:21:25 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:21:25 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:21:25 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:21:25 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:21:25 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:21:25 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:21:25 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:21:25 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:21:25 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:21:25 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:21:25 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:21:25 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:21:25 np0005558317 podman[126401]: 2025-12-13 07:21:25.836033277 +0000 UTC m=+0.027889418 container create b01596e13d94ca6005a426b7f7b67576bc81993b4519e970ee667b96e1d79712 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 02:21:25 np0005558317 systemd[1]: Started libpod-conmon-b01596e13d94ca6005a426b7f7b67576bc81993b4519e970ee667b96e1d79712.scope.
Dec 13 02:21:25 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:21:25 np0005558317 podman[126401]: 2025-12-13 07:21:25.88463247 +0000 UTC m=+0.076488611 container init b01596e13d94ca6005a426b7f7b67576bc81993b4519e970ee667b96e1d79712 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tesla, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:21:25 np0005558317 podman[126401]: 2025-12-13 07:21:25.889853771 +0000 UTC m=+0.081709912 container start b01596e13d94ca6005a426b7f7b67576bc81993b4519e970ee667b96e1d79712 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 02:21:25 np0005558317 podman[126401]: 2025-12-13 07:21:25.891197656 +0000 UTC m=+0.083053797 container attach b01596e13d94ca6005a426b7f7b67576bc81993b4519e970ee667b96e1d79712 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tesla, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 02:21:25 np0005558317 vigilant_tesla[126416]: 167 167
Dec 13 02:21:25 np0005558317 systemd[1]: libpod-b01596e13d94ca6005a426b7f7b67576bc81993b4519e970ee667b96e1d79712.scope: Deactivated successfully.
Dec 13 02:21:25 np0005558317 podman[126401]: 2025-12-13 07:21:25.894074963 +0000 UTC m=+0.085931105 container died b01596e13d94ca6005a426b7f7b67576bc81993b4519e970ee667b96e1d79712 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:21:25 np0005558317 systemd[1]: var-lib-containers-storage-overlay-b0694aef238ad530bcf796fd1600cf539a8042cac15455a179fb4cda2060505d-merged.mount: Deactivated successfully.
Dec 13 02:21:25 np0005558317 podman[126401]: 2025-12-13 07:21:25.915096895 +0000 UTC m=+0.106953036 container remove b01596e13d94ca6005a426b7f7b67576bc81993b4519e970ee667b96e1d79712 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tesla, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 02:21:25 np0005558317 podman[126401]: 2025-12-13 07:21:25.824286073 +0000 UTC m=+0.016142224 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:21:25 np0005558317 systemd[1]: libpod-conmon-b01596e13d94ca6005a426b7f7b67576bc81993b4519e970ee667b96e1d79712.scope: Deactivated successfully.
Dec 13 02:21:26 np0005558317 podman[126489]: 2025-12-13 07:21:26.032949873 +0000 UTC m=+0.030269299 container create 9172b6a4b744ec68e78384ad61cf6ec0dcabc4926b8f8053da97a2284195b685 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_ptolemy, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 02:21:26 np0005558317 systemd[1]: Started libpod-conmon-9172b6a4b744ec68e78384ad61cf6ec0dcabc4926b8f8053da97a2284195b685.scope.
Dec 13 02:21:26 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:21:26 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a517465f8f451bfcf4f8f9e98d8181ee46c5e2e579b07b0778a4ee623902ba93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:21:26 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a517465f8f451bfcf4f8f9e98d8181ee46c5e2e579b07b0778a4ee623902ba93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:21:26 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a517465f8f451bfcf4f8f9e98d8181ee46c5e2e579b07b0778a4ee623902ba93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:21:26 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a517465f8f451bfcf4f8f9e98d8181ee46c5e2e579b07b0778a4ee623902ba93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:21:26 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a517465f8f451bfcf4f8f9e98d8181ee46c5e2e579b07b0778a4ee623902ba93/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:21:26 np0005558317 podman[126489]: 2025-12-13 07:21:26.096597634 +0000 UTC m=+0.093917071 container init 9172b6a4b744ec68e78384ad61cf6ec0dcabc4926b8f8053da97a2284195b685 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_ptolemy, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 02:21:26 np0005558317 podman[126489]: 2025-12-13 07:21:26.101933239 +0000 UTC m=+0.099252666 container start 9172b6a4b744ec68e78384ad61cf6ec0dcabc4926b8f8053da97a2284195b685 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_ptolemy, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:21:26 np0005558317 podman[126489]: 2025-12-13 07:21:26.103321528 +0000 UTC m=+0.100640975 container attach 9172b6a4b744ec68e78384ad61cf6ec0dcabc4926b8f8053da97a2284195b685 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_ptolemy, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:21:26 np0005558317 podman[126489]: 2025-12-13 07:21:26.019722147 +0000 UTC m=+0.017041584 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:21:26 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v340: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:26 np0005558317 python3.9[126583]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:21:26 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:21:26 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:21:26 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:21:26 np0005558317 sharp_ptolemy[126503]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:21:26 np0005558317 sharp_ptolemy[126503]: --> All data devices are unavailable
Dec 13 02:21:26 np0005558317 systemd[1]: libpod-9172b6a4b744ec68e78384ad61cf6ec0dcabc4926b8f8053da97a2284195b685.scope: Deactivated successfully.
Dec 13 02:21:26 np0005558317 podman[126489]: 2025-12-13 07:21:26.482160225 +0000 UTC m=+0.479479662 container died 9172b6a4b744ec68e78384ad61cf6ec0dcabc4926b8f8053da97a2284195b685 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_ptolemy, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:21:26 np0005558317 systemd[1]: var-lib-containers-storage-overlay-a517465f8f451bfcf4f8f9e98d8181ee46c5e2e579b07b0778a4ee623902ba93-merged.mount: Deactivated successfully.
Dec 13 02:21:26 np0005558317 podman[126489]: 2025-12-13 07:21:26.507299904 +0000 UTC m=+0.504619331 container remove 9172b6a4b744ec68e78384ad61cf6ec0dcabc4926b8f8053da97a2284195b685 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:21:26 np0005558317 systemd[1]: libpod-conmon-9172b6a4b744ec68e78384ad61cf6ec0dcabc4926b8f8053da97a2284195b685.scope: Deactivated successfully.
Dec 13 02:21:26 np0005558317 podman[126823]: 2025-12-13 07:21:26.850385567 +0000 UTC m=+0.027884680 container create 833ad7c1a28b24191b1c69188123a9dc770ed9c08e47448344a2fc18f2e7d3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_montalcini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 02:21:26 np0005558317 python3.9[126812]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:21:26 np0005558317 systemd[1]: Started libpod-conmon-833ad7c1a28b24191b1c69188123a9dc770ed9c08e47448344a2fc18f2e7d3d1.scope.
Dec 13 02:21:26 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:21:26 np0005558317 podman[126823]: 2025-12-13 07:21:26.900889248 +0000 UTC m=+0.078388370 container init 833ad7c1a28b24191b1c69188123a9dc770ed9c08e47448344a2fc18f2e7d3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_montalcini, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 02:21:26 np0005558317 podman[126823]: 2025-12-13 07:21:26.905800898 +0000 UTC m=+0.083300010 container start 833ad7c1a28b24191b1c69188123a9dc770ed9c08e47448344a2fc18f2e7d3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_montalcini, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:21:26 np0005558317 podman[126823]: 2025-12-13 07:21:26.907383562 +0000 UTC m=+0.084882694 container attach 833ad7c1a28b24191b1c69188123a9dc770ed9c08e47448344a2fc18f2e7d3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 02:21:26 np0005558317 relaxed_montalcini[126836]: 167 167
Dec 13 02:21:26 np0005558317 systemd[1]: libpod-833ad7c1a28b24191b1c69188123a9dc770ed9c08e47448344a2fc18f2e7d3d1.scope: Deactivated successfully.
Dec 13 02:21:26 np0005558317 podman[126823]: 2025-12-13 07:21:26.910210794 +0000 UTC m=+0.087709907 container died 833ad7c1a28b24191b1c69188123a9dc770ed9c08e47448344a2fc18f2e7d3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:21:26 np0005558317 systemd[1]: var-lib-containers-storage-overlay-d6373f59514ec3f52ac0d79d8699dfda1c73240468f8ac50378e68cc1684b61f-merged.mount: Deactivated successfully.
Dec 13 02:21:26 np0005558317 podman[126823]: 2025-12-13 07:21:26.93334176 +0000 UTC m=+0.110840872 container remove 833ad7c1a28b24191b1c69188123a9dc770ed9c08e47448344a2fc18f2e7d3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:21:26 np0005558317 podman[126823]: 2025-12-13 07:21:26.83935521 +0000 UTC m=+0.016854341 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:21:26 np0005558317 systemd[1]: libpod-conmon-833ad7c1a28b24191b1c69188123a9dc770ed9c08e47448344a2fc18f2e7d3d1.scope: Deactivated successfully.
Dec 13 02:21:27 np0005558317 podman[126890]: 2025-12-13 07:21:27.057761998 +0000 UTC m=+0.031668509 container create 31e92762343d4fbcf7e345806620e366df14950259f1b1cae269050f939c0ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_lumiere, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 02:21:27 np0005558317 systemd[1]: Started libpod-conmon-31e92762343d4fbcf7e345806620e366df14950259f1b1cae269050f939c0ead.scope.
Dec 13 02:21:27 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:21:27 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5e686ea281672ceb91a078844a7083cd2bdcad34bdcd16914994213179a1fe9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:21:27 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5e686ea281672ceb91a078844a7083cd2bdcad34bdcd16914994213179a1fe9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:21:27 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5e686ea281672ceb91a078844a7083cd2bdcad34bdcd16914994213179a1fe9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:21:27 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5e686ea281672ceb91a078844a7083cd2bdcad34bdcd16914994213179a1fe9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:21:27 np0005558317 podman[126890]: 2025-12-13 07:21:27.118665221 +0000 UTC m=+0.092571723 container init 31e92762343d4fbcf7e345806620e366df14950259f1b1cae269050f939c0ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 02:21:27 np0005558317 podman[126890]: 2025-12-13 07:21:27.124075408 +0000 UTC m=+0.097981908 container start 31e92762343d4fbcf7e345806620e366df14950259f1b1cae269050f939c0ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 02:21:27 np0005558317 podman[126890]: 2025-12-13 07:21:27.125195913 +0000 UTC m=+0.099102404 container attach 31e92762343d4fbcf7e345806620e366df14950259f1b1cae269050f939c0ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:21:27 np0005558317 podman[126890]: 2025-12-13 07:21:27.045025154 +0000 UTC m=+0.018931675 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]: {
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:    "0": [
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:        {
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "devices": [
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "/dev/loop3"
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            ],
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "lv_name": "ceph_lv0",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "lv_size": "21470642176",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "name": "ceph_lv0",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "tags": {
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.cluster_name": "ceph",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.crush_device_class": "",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.encrypted": "0",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.objectstore": "bluestore",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.osd_id": "0",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.type": "block",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.vdo": "0",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.with_tpm": "0"
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            },
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "type": "block",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "vg_name": "ceph_vg0"
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:        }
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:    ],
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:    "1": [
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:        {
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "devices": [
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "/dev/loop4"
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            ],
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "lv_name": "ceph_lv1",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "lv_size": "21470642176",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "name": "ceph_lv1",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "tags": {
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.cluster_name": "ceph",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.crush_device_class": "",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.encrypted": "0",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.objectstore": "bluestore",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.osd_id": "1",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.type": "block",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.vdo": "0",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.with_tpm": "0"
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            },
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "type": "block",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "vg_name": "ceph_vg1"
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:        }
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:    ],
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:    "2": [
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:        {
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "devices": [
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "/dev/loop5"
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            ],
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "lv_name": "ceph_lv2",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "lv_size": "21470642176",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "name": "ceph_lv2",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "tags": {
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.cluster_name": "ceph",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.crush_device_class": "",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.encrypted": "0",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.objectstore": "bluestore",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.osd_id": "2",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.type": "block",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.vdo": "0",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:                "ceph.with_tpm": "0"
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            },
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "type": "block",
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:            "vg_name": "ceph_vg2"
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:        }
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]:    ]
Dec 13 02:21:27 np0005558317 hardcore_lumiere[126947]: }
Dec 13 02:21:27 np0005558317 systemd[1]: libpod-31e92762343d4fbcf7e345806620e366df14950259f1b1cae269050f939c0ead.scope: Deactivated successfully.
Dec 13 02:21:27 np0005558317 podman[126890]: 2025-12-13 07:21:27.370914012 +0000 UTC m=+0.344820513 container died 31e92762343d4fbcf7e345806620e366df14950259f1b1cae269050f939c0ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_lumiere, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:21:27 np0005558317 systemd[1]: var-lib-containers-storage-overlay-e5e686ea281672ceb91a078844a7083cd2bdcad34bdcd16914994213179a1fe9-merged.mount: Deactivated successfully.
Dec 13 02:21:27 np0005558317 podman[126890]: 2025-12-13 07:21:27.393606073 +0000 UTC m=+0.367512574 container remove 31e92762343d4fbcf7e345806620e366df14950259f1b1cae269050f939c0ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_lumiere, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Dec 13 02:21:27 np0005558317 systemd[1]: libpod-conmon-31e92762343d4fbcf7e345806620e366df14950259f1b1cae269050f939c0ead.scope: Deactivated successfully.
Dec 13 02:21:27 np0005558317 python3.9[127039]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:21:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:21:27 np0005558317 podman[127148]: 2025-12-13 07:21:27.742866458 +0000 UTC m=+0.030084723 container create 5e22ec5deda52c7fe6fa044f11e4e10d68e93ea7c462d79300c65b7d43b90946 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 02:21:27 np0005558317 systemd[1]: Started libpod-conmon-5e22ec5deda52c7fe6fa044f11e4e10d68e93ea7c462d79300c65b7d43b90946.scope.
Dec 13 02:21:27 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:21:27 np0005558317 podman[127148]: 2025-12-13 07:21:27.794660113 +0000 UTC m=+0.081878378 container init 5e22ec5deda52c7fe6fa044f11e4e10d68e93ea7c462d79300c65b7d43b90946 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_pascal, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:21:27 np0005558317 podman[127148]: 2025-12-13 07:21:27.799920177 +0000 UTC m=+0.087138432 container start 5e22ec5deda52c7fe6fa044f11e4e10d68e93ea7c462d79300c65b7d43b90946 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_pascal, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 02:21:27 np0005558317 podman[127148]: 2025-12-13 07:21:27.801197508 +0000 UTC m=+0.088415763 container attach 5e22ec5deda52c7fe6fa044f11e4e10d68e93ea7c462d79300c65b7d43b90946 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:21:27 np0005558317 sleepy_pascal[127161]: 167 167
Dec 13 02:21:27 np0005558317 systemd[1]: libpod-5e22ec5deda52c7fe6fa044f11e4e10d68e93ea7c462d79300c65b7d43b90946.scope: Deactivated successfully.
Dec 13 02:21:27 np0005558317 podman[127148]: 2025-12-13 07:21:27.80455638 +0000 UTC m=+0.091774655 container died 5e22ec5deda52c7fe6fa044f11e4e10d68e93ea7c462d79300c65b7d43b90946 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 02:21:27 np0005558317 podman[127148]: 2025-12-13 07:21:27.821251031 +0000 UTC m=+0.108469285 container remove 5e22ec5deda52c7fe6fa044f11e4e10d68e93ea7c462d79300c65b7d43b90946 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_pascal, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:21:27 np0005558317 podman[127148]: 2025-12-13 07:21:27.730823949 +0000 UTC m=+0.018042203 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:21:27 np0005558317 systemd[1]: libpod-conmon-5e22ec5deda52c7fe6fa044f11e4e10d68e93ea7c462d79300c65b7d43b90946.scope: Deactivated successfully.
Dec 13 02:21:27 np0005558317 systemd[1]: var-lib-containers-storage-overlay-1f242c5b8f303ae8d7ab0e8a1f726cf43054a56664d6192b8dc14ceab8221d85-merged.mount: Deactivated successfully.
Dec 13 02:21:27 np0005558317 podman[127252]: 2025-12-13 07:21:27.943421852 +0000 UTC m=+0.029202556 container create 8c6a3b4f5421b33956b0fb2100f80aca6e3d5a2cfdb82117888e28bcd805c2ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:21:27 np0005558317 systemd[1]: Started libpod-conmon-8c6a3b4f5421b33956b0fb2100f80aca6e3d5a2cfdb82117888e28bcd805c2ca.scope.
Dec 13 02:21:27 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:21:27 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c92c191c52649bc524365de8a435b5b176d3b4d679e5a41b91b81a945048ab5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:21:27 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c92c191c52649bc524365de8a435b5b176d3b4d679e5a41b91b81a945048ab5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:21:27 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c92c191c52649bc524365de8a435b5b176d3b4d679e5a41b91b81a945048ab5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:21:27 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c92c191c52649bc524365de8a435b5b176d3b4d679e5a41b91b81a945048ab5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:21:28 np0005558317 podman[127252]: 2025-12-13 07:21:28.009574689 +0000 UTC m=+0.095355393 container init 8c6a3b4f5421b33956b0fb2100f80aca6e3d5a2cfdb82117888e28bcd805c2ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_murdock, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:21:28 np0005558317 podman[127252]: 2025-12-13 07:21:28.01433275 +0000 UTC m=+0.100113454 container start 8c6a3b4f5421b33956b0fb2100f80aca6e3d5a2cfdb82117888e28bcd805c2ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_murdock, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:21:28 np0005558317 podman[127252]: 2025-12-13 07:21:28.015563944 +0000 UTC m=+0.101344648 container attach 8c6a3b4f5421b33956b0fb2100f80aca6e3d5a2cfdb82117888e28bcd805c2ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Dec 13 02:21:28 np0005558317 podman[127252]: 2025-12-13 07:21:27.931609165 +0000 UTC m=+0.017389879 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:21:28 np0005558317 python3.9[127266]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610487.0310538-65-85864415652591/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=93ffe3a97552f1a61bab50e724ad94c02d337e62 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:21:28 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v341: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:28 np0005558317 python3.9[127484]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:21:28 np0005558317 lvm[127499]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:21:28 np0005558317 lvm[127499]: VG ceph_vg0 finished
Dec 13 02:21:28 np0005558317 lvm[127502]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:21:28 np0005558317 lvm[127502]: VG ceph_vg1 finished
Dec 13 02:21:28 np0005558317 lvm[127505]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:21:28 np0005558317 lvm[127505]: VG ceph_vg2 finished
Dec 13 02:21:28 np0005558317 lvm[127506]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:21:28 np0005558317 romantic_murdock[127272]: {}
Dec 13 02:21:28 np0005558317 lvm[127506]: VG ceph_vg1 finished
Dec 13 02:21:28 np0005558317 systemd[1]: libpod-8c6a3b4f5421b33956b0fb2100f80aca6e3d5a2cfdb82117888e28bcd805c2ca.scope: Deactivated successfully.
Dec 13 02:21:28 np0005558317 podman[127252]: 2025-12-13 07:21:28.657720269 +0000 UTC m=+0.743500973 container died 8c6a3b4f5421b33956b0fb2100f80aca6e3d5a2cfdb82117888e28bcd805c2ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 02:21:28 np0005558317 systemd[1]: var-lib-containers-storage-overlay-8c92c191c52649bc524365de8a435b5b176d3b4d679e5a41b91b81a945048ab5-merged.mount: Deactivated successfully.
Dec 13 02:21:28 np0005558317 podman[127252]: 2025-12-13 07:21:28.680402722 +0000 UTC m=+0.766183427 container remove 8c6a3b4f5421b33956b0fb2100f80aca6e3d5a2cfdb82117888e28bcd805c2ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:21:28 np0005558317 systemd[1]: libpod-conmon-8c6a3b4f5421b33956b0fb2100f80aca6e3d5a2cfdb82117888e28bcd805c2ca.scope: Deactivated successfully.
Dec 13 02:21:28 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:21:28 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:21:28 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:21:28 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:21:29 np0005558317 python3.9[127664]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610488.207571-65-80476117840165/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=aa3fdf38ba0502d304f7711b68e366c39f369e39 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:21:29 np0005558317 python3.9[127816]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:21:29 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:21:29 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:21:29 np0005558317 python3.9[127939]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610489.110907-65-135232022985091/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=53702116d63484fa29e275004baf9ef3d140f885 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:21:30 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v342: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:30 np0005558317 python3.9[128091]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:21:30 np0005558317 python3.9[128243]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:21:31 np0005558317 python3.9[128395]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:21:31 np0005558317 python3.9[128518]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610490.957922-124-123964135607505/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=9b324efd75b06f149665358bc5a26a4d083e28e1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:21:32 np0005558317 python3.9[128670]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:21:32 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v343: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:32 np0005558317 python3.9[128793]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610491.7980857-124-196559809791329/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=f836755fbf73bec0ca76e493d55feda43385126d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:21:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:21:33 np0005558317 python3.9[128945]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:21:33 np0005558317 python3.9[129068]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610492.6338096-124-32039470112960/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=45e7071bcb8a7db3af7217195287d138c3b8e819 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:21:34 np0005558317 python3.9[129220]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:21:34 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v344: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:34 np0005558317 python3.9[129372]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:21:35 np0005558317 python3.9[129524]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:21:35 np0005558317 python3.9[129647]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610494.6801355-183-255085279252703/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=eed392971785171bf75d6f89c2fe22f844cb5eb7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:21:35 np0005558317 python3.9[129799]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:21:36 np0005558317 python3.9[129922]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610495.562342-183-207839471850124/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=f836755fbf73bec0ca76e493d55feda43385126d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:21:36 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v345: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:36 np0005558317 python3.9[130074]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:21:37 np0005558317 python3.9[130197]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610496.3566833-183-43751927436608/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=6e664a5b446396149ea98486bb38e13e356dec0e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:21:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:21:38 np0005558317 python3.9[130349]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:21:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:21:38
Dec 13 02:21:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:21:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 02:21:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'images', 'default.rgw.control', 'backups', 'default.rgw.log', '.mgr', '.rgw.root', 'volumes']
Dec 13 02:21:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:21:38 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v346: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:38 np0005558317 python3.9[130501]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:21:38 np0005558317 python3.9[130624]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610498.1701996-251-199209639023605/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ac04d306f192c0875048c78c53711957498c3ede backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:21:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:21:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:21:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:21:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:21:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:21:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:21:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:21:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:21:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:21:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:21:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:21:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:21:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:21:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:21:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:21:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:21:39 np0005558317 python3.9[130776]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:21:39 np0005558317 python3.9[130928]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:21:40 np0005558317 python3.9[131051]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610499.5764947-275-145744352811170/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ac04d306f192c0875048c78c53711957498c3ede backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:21:40 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v347: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:40 np0005558317 python3.9[131203]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:21:41 np0005558317 python3.9[131355]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:21:41 np0005558317 python3.9[131478]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610500.9542894-299-26184027675444/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ac04d306f192c0875048c78c53711957498c3ede backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:21:42 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v348: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:42 np0005558317 python3.9[131630]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:21:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:21:42 np0005558317 python3.9[131782]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:21:43 np0005558317 python3.9[131905]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610502.4866176-323-174055833899131/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ac04d306f192c0875048c78c53711957498c3ede backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:21:43 np0005558317 python3.9[132057]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:21:44 np0005558317 python3.9[132209]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:21:44 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v349: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:44 np0005558317 python3.9[132332]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610503.8916225-347-121092192754256/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ac04d306f192c0875048c78c53711957498c3ede backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:21:45 np0005558317 python3.9[132484]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:21:45 np0005558317 python3.9[132636]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:21:45 np0005558317 python3.9[132759]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610505.2772593-371-218379423811047/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ac04d306f192c0875048c78c53711957498c3ede backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:21:46 np0005558317 systemd[1]: session-46.scope: Deactivated successfully.
Dec 13 02:21:46 np0005558317 systemd[1]: session-46.scope: Consumed 15.980s CPU time.
Dec 13 02:21:46 np0005558317 systemd-logind[745]: Session 46 logged out. Waiting for processes to exit.
Dec 13 02:21:46 np0005558317 systemd-logind[745]: Removed session 46.
Dec 13 02:21:46 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v350: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:21:48 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v351: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:21:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:21:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:21:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:21:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:21:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:21:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:21:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:21:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:21:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:21:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:21:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:21:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 02:21:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:21:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:21:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:21:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:21:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:21:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:21:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:21:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:21:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:21:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:21:50 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v352: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:51 np0005558317 systemd-logind[745]: New session 47 of user zuul.
Dec 13 02:21:51 np0005558317 systemd[1]: Started Session 47 of User zuul.
Dec 13 02:21:52 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v353: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:52 np0005558317 python3.9[132939]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:21:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:21:52 np0005558317 python3.9[133091]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:21:53 np0005558317 python3.9[133214]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610512.47459-34-206133187231405/.source.conf _original_basename=ceph.conf follow=False checksum=f9f4c7f65fdb2c19267612cdcf348da04cb4206e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:21:53 np0005558317 python3.9[133366]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:21:54 np0005558317 python3.9[133489]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610513.5907702-34-275441815188965/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=89bb88aee4825eacb5f29faabebd795dc909bcd4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:21:54 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v354: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:54 np0005558317 systemd[1]: session-47.scope: Deactivated successfully.
Dec 13 02:21:54 np0005558317 systemd[1]: session-47.scope: Consumed 1.914s CPU time.
Dec 13 02:21:54 np0005558317 systemd-logind[745]: Session 47 logged out. Waiting for processes to exit.
Dec 13 02:21:54 np0005558317 systemd-logind[745]: Removed session 47.
Dec 13 02:21:56 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v355: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:21:58 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v356: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:21:59 np0005558317 systemd-logind[745]: New session 48 of user zuul.
Dec 13 02:21:59 np0005558317 systemd[1]: Started Session 48 of User zuul.
Dec 13 02:22:00 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v357: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:00 np0005558317 python3.9[133667]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:22:01 np0005558317 python3.9[133823]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:22:01 np0005558317 python3.9[133975]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:22:02 np0005558317 python3.9[134125]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:22:02 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v358: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:22:02 np0005558317 python3.9[134277]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 13 02:22:04 np0005558317 dbus-broker-launch[734]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec 13 02:22:04 np0005558317 python3.9[134433]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:22:04 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v359: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:05 np0005558317 python3.9[134517]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:22:06 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v360: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:06 np0005558317 python3.9[134670]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 02:22:07 np0005558317 python3[134825]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec 13 02:22:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:22:08 np0005558317 python3.9[134977]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:22:08 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v361: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:08 np0005558317 python3.9[135129]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:22:08 np0005558317 python3.9[135207]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:22:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:22:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:22:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:22:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:22:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:22:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:22:09 np0005558317 python3.9[135359]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:22:09 np0005558317 python3.9[135437]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.lzwq0wod recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:22:10 np0005558317 python3.9[135589]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:22:10 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v362: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:10 np0005558317 python3.9[135667]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:22:11 np0005558317 python3.9[135819]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:22:11 np0005558317 python3[135972]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 13 02:22:12 np0005558317 python3.9[136124]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:22:12 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v363: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:22:12 np0005558317 python3.9[136249]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610531.8916469-157-33462308288053/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:22:13 np0005558317 python3.9[136401]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:22:13 np0005558317 python3.9[136526]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610533.0020528-172-113769241810849/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:22:14 np0005558317 python3.9[136678]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:22:14 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v364: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:14 np0005558317 python3.9[136803]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610533.933562-187-192852118733503/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:22:15 np0005558317 python3.9[136955]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:22:15 np0005558317 python3.9[137080]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610534.8378847-202-266397596612345/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:22:16 np0005558317 python3.9[137232]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:22:16 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v365: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:16 np0005558317 python3.9[137357]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610535.9171937-217-212678752126689/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:22:17 np0005558317 python3.9[137509]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:22:17 np0005558317 python3.9[137661]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:22:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:22:18 np0005558317 python3.9[137816]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:22:18 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v366: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:18 np0005558317 python3.9[137968]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:22:19 np0005558317 python3.9[138121]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:22:19 np0005558317 python3.9[138275]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:22:20 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v367: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:20 np0005558317 python3.9[138430]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:22:21 np0005558317 python3.9[138580]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:22:22 np0005558317 python3.9[138733]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:cb:58:d7:dd" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:22:22 np0005558317 ovs-vsctl[138734]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:cb:58:d7:dd external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec 13 02:22:22 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v368: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:22 np0005558317 python3.9[138886]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:22:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:22:23 np0005558317 python3.9[139041]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:22:23 np0005558317 ovs-vsctl[139042]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec 13 02:22:23 np0005558317 python3.9[139192]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:22:24 np0005558317 python3.9[139346]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:22:24 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v369: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:24 np0005558317 python3.9[139498]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:22:25 np0005558317 python3.9[139576]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:22:25 np0005558317 python3.9[139728]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:22:25 np0005558317 python3.9[139806]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:22:26 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v370: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:26 np0005558317 python3.9[139958]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:22:26 np0005558317 python3.9[140110]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:22:27 np0005558317 python3.9[140188]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.744809) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610547744935, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1442, "num_deletes": 250, "total_data_size": 2249453, "memory_usage": 2286904, "flush_reason": "Manual Compaction"}
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610547751478, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1299021, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7381, "largest_seqno": 8822, "table_properties": {"data_size": 1294085, "index_size": 2204, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12798, "raw_average_key_size": 19, "raw_value_size": 1283100, "raw_average_value_size": 1998, "num_data_blocks": 104, "num_entries": 642, "num_filter_entries": 642, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610397, "oldest_key_time": 1765610397, "file_creation_time": 1765610547, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 6729 microseconds, and 5585 cpu microseconds.
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.751565) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1299021 bytes OK
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.751613) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.752041) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.752053) EVENT_LOG_v1 {"time_micros": 1765610547752050, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.752076) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2243069, prev total WAL file size 2243069, number of live WAL files 2.
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.752912) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1268KB)], [20(7551KB)]
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610547752993, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 9031984, "oldest_snapshot_seqno": -1}
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3317 keys, 6892166 bytes, temperature: kUnknown
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610547769544, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6892166, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6867192, "index_size": 15585, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 79564, "raw_average_key_size": 23, "raw_value_size": 6804386, "raw_average_value_size": 2051, "num_data_blocks": 691, "num_entries": 3317, "num_filter_entries": 3317, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610001, "oldest_key_time": 0, "file_creation_time": 1765610547, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.769736) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6892166 bytes
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.770147) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 544.2 rd, 415.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 7.4 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(12.3) write-amplify(5.3) OK, records in: 3760, records dropped: 443 output_compression: NoCompression
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.770165) EVENT_LOG_v1 {"time_micros": 1765610547770155, "job": 6, "event": "compaction_finished", "compaction_time_micros": 16596, "compaction_time_cpu_micros": 13938, "output_level": 6, "num_output_files": 1, "total_output_size": 6892166, "num_input_records": 3760, "num_output_records": 3317, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610547770461, "job": 6, "event": "table_file_deletion", "file_number": 22}
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610547771283, "job": 6, "event": "table_file_deletion", "file_number": 20}
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.752808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.771322) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.771325) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.771327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.771329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:22:27 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.771330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:22:27 np0005558317 python3.9[140340]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:22:28 np0005558317 python3.9[140418]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:22:28 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v371: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:28 np0005558317 python3.9[140570]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:22:28 np0005558317 systemd[1]: Reloading.
Dec 13 02:22:29 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:22:29 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:22:29 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:22:29 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:22:29 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:22:29 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:22:29 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:22:29 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:22:29 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:22:29 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:22:29 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:22:29 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:22:29 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:22:29 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:22:29 np0005558317 podman[140898]: 2025-12-13 07:22:29.641956718 +0000 UTC m=+0.028716770 container create 529a2aed9e55cc78e952676bd272abf013507d2f492b9865b8009031d93ecbe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:22:29 np0005558317 systemd[1]: Started libpod-conmon-529a2aed9e55cc78e952676bd272abf013507d2f492b9865b8009031d93ecbe5.scope.
Dec 13 02:22:29 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:22:29 np0005558317 python3.9[140886]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:22:29 np0005558317 podman[140898]: 2025-12-13 07:22:29.705621196 +0000 UTC m=+0.092381268 container init 529a2aed9e55cc78e952676bd272abf013507d2f492b9865b8009031d93ecbe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 02:22:29 np0005558317 podman[140898]: 2025-12-13 07:22:29.712594249 +0000 UTC m=+0.099354311 container start 529a2aed9e55cc78e952676bd272abf013507d2f492b9865b8009031d93ecbe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_leavitt, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 02:22:29 np0005558317 podman[140898]: 2025-12-13 07:22:29.713878982 +0000 UTC m=+0.100639034 container attach 529a2aed9e55cc78e952676bd272abf013507d2f492b9865b8009031d93ecbe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_leavitt, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 02:22:29 np0005558317 pensive_leavitt[140911]: 167 167
Dec 13 02:22:29 np0005558317 systemd[1]: libpod-529a2aed9e55cc78e952676bd272abf013507d2f492b9865b8009031d93ecbe5.scope: Deactivated successfully.
Dec 13 02:22:29 np0005558317 podman[140898]: 2025-12-13 07:22:29.719925866 +0000 UTC m=+0.106686068 container died 529a2aed9e55cc78e952676bd272abf013507d2f492b9865b8009031d93ecbe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 02:22:29 np0005558317 podman[140898]: 2025-12-13 07:22:29.63016627 +0000 UTC m=+0.016926332 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:22:29 np0005558317 systemd[1]: var-lib-containers-storage-overlay-f9f7b694e735924e00b219a792064c0b43bb18e6db39d011ac23dd82162e84a2-merged.mount: Deactivated successfully.
Dec 13 02:22:29 np0005558317 podman[140898]: 2025-12-13 07:22:29.743940506 +0000 UTC m=+0.130700559 container remove 529a2aed9e55cc78e952676bd272abf013507d2f492b9865b8009031d93ecbe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_leavitt, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 02:22:29 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:22:29 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:22:29 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:22:29 np0005558317 systemd[1]: libpod-conmon-529a2aed9e55cc78e952676bd272abf013507d2f492b9865b8009031d93ecbe5.scope: Deactivated successfully.
Dec 13 02:22:29 np0005558317 podman[140958]: 2025-12-13 07:22:29.876126303 +0000 UTC m=+0.034805152 container create 90a2cd0f572960ab4503334a7c06179bfbdb3d486b68f7304ebaf3c243ab7c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 02:22:29 np0005558317 systemd[1]: Started libpod-conmon-90a2cd0f572960ab4503334a7c06179bfbdb3d486b68f7304ebaf3c243ab7c5c.scope.
Dec 13 02:22:29 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:22:29 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4687a1bfd69030ed94daa2f5d58c1f1647b7f21b2248fbabdb484d36c69e02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:22:29 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4687a1bfd69030ed94daa2f5d58c1f1647b7f21b2248fbabdb484d36c69e02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:22:29 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4687a1bfd69030ed94daa2f5d58c1f1647b7f21b2248fbabdb484d36c69e02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:22:29 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4687a1bfd69030ed94daa2f5d58c1f1647b7f21b2248fbabdb484d36c69e02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:22:29 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4687a1bfd69030ed94daa2f5d58c1f1647b7f21b2248fbabdb484d36c69e02/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:22:29 np0005558317 podman[140958]: 2025-12-13 07:22:29.939840353 +0000 UTC m=+0.098519203 container init 90a2cd0f572960ab4503334a7c06179bfbdb3d486b68f7304ebaf3c243ab7c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_elgamal, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:22:29 np0005558317 podman[140958]: 2025-12-13 07:22:29.945593486 +0000 UTC m=+0.104272335 container start 90a2cd0f572960ab4503334a7c06179bfbdb3d486b68f7304ebaf3c243ab7c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_elgamal, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:22:29 np0005558317 podman[140958]: 2025-12-13 07:22:29.946698842 +0000 UTC m=+0.105377681 container attach 90a2cd0f572960ab4503334a7c06179bfbdb3d486b68f7304ebaf3c243ab7c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_elgamal, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:22:29 np0005558317 podman[140958]: 2025-12-13 07:22:29.862891804 +0000 UTC m=+0.021570672 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:22:30 np0005558317 python3.9[141026]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:22:30 np0005558317 pedantic_elgamal[141015]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:22:30 np0005558317 pedantic_elgamal[141015]: --> All data devices are unavailable
Dec 13 02:22:30 np0005558317 systemd[1]: libpod-90a2cd0f572960ab4503334a7c06179bfbdb3d486b68f7304ebaf3c243ab7c5c.scope: Deactivated successfully.
Dec 13 02:22:30 np0005558317 podman[140958]: 2025-12-13 07:22:30.327556802 +0000 UTC m=+0.486235641 container died 90a2cd0f572960ab4503334a7c06179bfbdb3d486b68f7304ebaf3c243ab7c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_elgamal, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 02:22:30 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v372: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:30 np0005558317 systemd[1]: var-lib-containers-storage-overlay-dc4687a1bfd69030ed94daa2f5d58c1f1647b7f21b2248fbabdb484d36c69e02-merged.mount: Deactivated successfully.
Dec 13 02:22:30 np0005558317 podman[140958]: 2025-12-13 07:22:30.354997226 +0000 UTC m=+0.513676075 container remove 90a2cd0f572960ab4503334a7c06179bfbdb3d486b68f7304ebaf3c243ab7c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_elgamal, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 02:22:30 np0005558317 systemd[1]: libpod-conmon-90a2cd0f572960ab4503334a7c06179bfbdb3d486b68f7304ebaf3c243ab7c5c.scope: Deactivated successfully.
Dec 13 02:22:30 np0005558317 python3.9[141254]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:22:30 np0005558317 podman[141267]: 2025-12-13 07:22:30.745060712 +0000 UTC m=+0.026947959 container create ebb70961688b1c9eab28abe7281879474033066c52ed44a56418eb836508a899 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_visvesvaraya, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 02:22:30 np0005558317 systemd[1]: Started libpod-conmon-ebb70961688b1c9eab28abe7281879474033066c52ed44a56418eb836508a899.scope.
Dec 13 02:22:30 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:22:30 np0005558317 podman[141267]: 2025-12-13 07:22:30.80530075 +0000 UTC m=+0.087188018 container init ebb70961688b1c9eab28abe7281879474033066c52ed44a56418eb836508a899 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 02:22:30 np0005558317 podman[141267]: 2025-12-13 07:22:30.810343469 +0000 UTC m=+0.092230717 container start ebb70961688b1c9eab28abe7281879474033066c52ed44a56418eb836508a899 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_visvesvaraya, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:22:30 np0005558317 podman[141267]: 2025-12-13 07:22:30.811471115 +0000 UTC m=+0.093358354 container attach ebb70961688b1c9eab28abe7281879474033066c52ed44a56418eb836508a899 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_visvesvaraya, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:22:30 np0005558317 sleepy_visvesvaraya[141303]: 167 167
Dec 13 02:22:30 np0005558317 systemd[1]: libpod-ebb70961688b1c9eab28abe7281879474033066c52ed44a56418eb836508a899.scope: Deactivated successfully.
Dec 13 02:22:30 np0005558317 conmon[141303]: conmon ebb70961688b1c9eab28 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ebb70961688b1c9eab28abe7281879474033066c52ed44a56418eb836508a899.scope/container/memory.events
Dec 13 02:22:30 np0005558317 podman[141267]: 2025-12-13 07:22:30.814174853 +0000 UTC m=+0.096062101 container died ebb70961688b1c9eab28abe7281879474033066c52ed44a56418eb836508a899 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_visvesvaraya, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 02:22:30 np0005558317 podman[141267]: 2025-12-13 07:22:30.73322074 +0000 UTC m=+0.015108008 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:22:30 np0005558317 systemd[1]: var-lib-containers-storage-overlay-a0e503ffcfaafacfc947c6fd599ec32a5d67a36d332a9d36992be412cd39345a-merged.mount: Deactivated successfully.
Dec 13 02:22:30 np0005558317 podman[141267]: 2025-12-13 07:22:30.840122472 +0000 UTC m=+0.122009720 container remove ebb70961688b1c9eab28abe7281879474033066c52ed44a56418eb836508a899 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_visvesvaraya, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:22:30 np0005558317 systemd[1]: libpod-conmon-ebb70961688b1c9eab28abe7281879474033066c52ed44a56418eb836508a899.scope: Deactivated successfully.
Dec 13 02:22:30 np0005558317 podman[141377]: 2025-12-13 07:22:30.968114174 +0000 UTC m=+0.033625377 container create bc680d267c05caca87c481d192e0aa7e849e71636a05a8fc0588e7d7f90b878c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_herschel, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 02:22:30 np0005558317 systemd[1]: Started libpod-conmon-bc680d267c05caca87c481d192e0aa7e849e71636a05a8fc0588e7d7f90b878c.scope.
Dec 13 02:22:31 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:22:31 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a204b34ecab8bcc113462c30dce111de6be6481071ef7e8cdb437dc0879475a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:22:31 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a204b34ecab8bcc113462c30dce111de6be6481071ef7e8cdb437dc0879475a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:22:31 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a204b34ecab8bcc113462c30dce111de6be6481071ef7e8cdb437dc0879475a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:22:31 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a204b34ecab8bcc113462c30dce111de6be6481071ef7e8cdb437dc0879475a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:22:31 np0005558317 podman[141377]: 2025-12-13 07:22:31.033861523 +0000 UTC m=+0.099372747 container init bc680d267c05caca87c481d192e0aa7e849e71636a05a8fc0588e7d7f90b878c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:22:31 np0005558317 podman[141377]: 2025-12-13 07:22:31.039674007 +0000 UTC m=+0.105185209 container start bc680d267c05caca87c481d192e0aa7e849e71636a05a8fc0588e7d7f90b878c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:22:31 np0005558317 podman[141377]: 2025-12-13 07:22:31.041022278 +0000 UTC m=+0.106533481 container attach bc680d267c05caca87c481d192e0aa7e849e71636a05a8fc0588e7d7f90b878c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:22:31 np0005558317 podman[141377]: 2025-12-13 07:22:30.955029826 +0000 UTC m=+0.020541049 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:22:31 np0005558317 python3.9[141371]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:22:31 np0005558317 nice_herschel[141390]: {
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:    "0": [
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:        {
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "devices": [
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "/dev/loop3"
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            ],
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "lv_name": "ceph_lv0",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "lv_size": "21470642176",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "name": "ceph_lv0",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "tags": {
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.cluster_name": "ceph",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.crush_device_class": "",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.encrypted": "0",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.objectstore": "bluestore",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.osd_id": "0",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.type": "block",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.vdo": "0",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.with_tpm": "0"
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            },
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "type": "block",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "vg_name": "ceph_vg0"
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:        }
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:    ],
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:    "1": [
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:        {
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "devices": [
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "/dev/loop4"
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            ],
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "lv_name": "ceph_lv1",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "lv_size": "21470642176",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "name": "ceph_lv1",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "tags": {
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.cluster_name": "ceph",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.crush_device_class": "",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.encrypted": "0",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.objectstore": "bluestore",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.osd_id": "1",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.type": "block",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.vdo": "0",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.with_tpm": "0"
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            },
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "type": "block",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "vg_name": "ceph_vg1"
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:        }
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:    ],
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:    "2": [
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:        {
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "devices": [
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "/dev/loop5"
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            ],
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "lv_name": "ceph_lv2",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "lv_size": "21470642176",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "name": "ceph_lv2",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "tags": {
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.cluster_name": "ceph",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.crush_device_class": "",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.encrypted": "0",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.objectstore": "bluestore",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.osd_id": "2",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.type": "block",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.vdo": "0",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:                "ceph.with_tpm": "0"
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            },
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "type": "block",
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:            "vg_name": "ceph_vg2"
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:        }
Dec 13 02:22:31 np0005558317 nice_herschel[141390]:    ]
Dec 13 02:22:31 np0005558317 nice_herschel[141390]: }
Dec 13 02:22:31 np0005558317 systemd[1]: libpod-bc680d267c05caca87c481d192e0aa7e849e71636a05a8fc0588e7d7f90b878c.scope: Deactivated successfully.
Dec 13 02:22:31 np0005558317 podman[141377]: 2025-12-13 07:22:31.303692126 +0000 UTC m=+0.369203329 container died bc680d267c05caca87c481d192e0aa7e849e71636a05a8fc0588e7d7f90b878c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_herschel, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 02:22:31 np0005558317 systemd[1]: var-lib-containers-storage-overlay-8a204b34ecab8bcc113462c30dce111de6be6481071ef7e8cdb437dc0879475a-merged.mount: Deactivated successfully.
Dec 13 02:22:31 np0005558317 podman[141377]: 2025-12-13 07:22:31.331913395 +0000 UTC m=+0.397424598 container remove bc680d267c05caca87c481d192e0aa7e849e71636a05a8fc0588e7d7f90b878c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_herschel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:22:31 np0005558317 systemd[1]: libpod-conmon-bc680d267c05caca87c481d192e0aa7e849e71636a05a8fc0588e7d7f90b878c.scope: Deactivated successfully.
Dec 13 02:22:31 np0005558317 podman[141620]: 2025-12-13 07:22:31.697841644 +0000 UTC m=+0.029106382 container create 3f71edb44a600277a78cb5b031b4b3ab80b64e7f823c32840ba3311480752018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_benz, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Dec 13 02:22:31 np0005558317 python3.9[141586]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:22:31 np0005558317 systemd[1]: Reloading.
Dec 13 02:22:31 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:22:31 np0005558317 podman[141620]: 2025-12-13 07:22:31.685825963 +0000 UTC m=+0.017090721 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:22:31 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:22:31 np0005558317 systemd[1]: Started libpod-conmon-3f71edb44a600277a78cb5b031b4b3ab80b64e7f823c32840ba3311480752018.scope.
Dec 13 02:22:31 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:22:31 np0005558317 podman[141620]: 2025-12-13 07:22:31.970180807 +0000 UTC m=+0.301445555 container init 3f71edb44a600277a78cb5b031b4b3ab80b64e7f823c32840ba3311480752018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_benz, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:22:31 np0005558317 podman[141620]: 2025-12-13 07:22:31.976486847 +0000 UTC m=+0.307751585 container start 3f71edb44a600277a78cb5b031b4b3ab80b64e7f823c32840ba3311480752018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_benz, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 02:22:31 np0005558317 lucid_benz[141670]: 167 167
Dec 13 02:22:31 np0005558317 podman[141620]: 2025-12-13 07:22:31.981097895 +0000 UTC m=+0.312362643 container attach 3f71edb44a600277a78cb5b031b4b3ab80b64e7f823c32840ba3311480752018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_benz, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:22:31 np0005558317 systemd[1]: libpod-3f71edb44a600277a78cb5b031b4b3ab80b64e7f823c32840ba3311480752018.scope: Deactivated successfully.
Dec 13 02:22:31 np0005558317 podman[141620]: 2025-12-13 07:22:31.981840899 +0000 UTC m=+0.313105638 container died 3f71edb44a600277a78cb5b031b4b3ab80b64e7f823c32840ba3311480752018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_benz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 02:22:31 np0005558317 systemd[1]: Starting Create netns directory...
Dec 13 02:22:31 np0005558317 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 13 02:22:31 np0005558317 systemd[1]: var-lib-containers-storage-overlay-980b0a69c0866cc6de50bc52d23e85f14c5c09ebcac4368e46d44ea12be00b10-merged.mount: Deactivated successfully.
Dec 13 02:22:32 np0005558317 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 13 02:22:32 np0005558317 systemd[1]: Finished Create netns directory.
Dec 13 02:22:32 np0005558317 podman[141620]: 2025-12-13 07:22:32.008069557 +0000 UTC m=+0.339334296 container remove 3f71edb44a600277a78cb5b031b4b3ab80b64e7f823c32840ba3311480752018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_benz, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 02:22:32 np0005558317 systemd[1]: libpod-conmon-3f71edb44a600277a78cb5b031b4b3ab80b64e7f823c32840ba3311480752018.scope: Deactivated successfully.
Dec 13 02:22:32 np0005558317 podman[141722]: 2025-12-13 07:22:32.140921585 +0000 UTC m=+0.031874359 container create 9ef07548b54809b544b4fe97ad0cc39724c849e2dd48efe485909f0a717a5ecc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_kirch, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:22:32 np0005558317 systemd[1]: Started libpod-conmon-9ef07548b54809b544b4fe97ad0cc39724c849e2dd48efe485909f0a717a5ecc.scope.
Dec 13 02:22:32 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:22:32 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08fa3beb7d57924b54e17ac3c72e5bc23bb316ccbf0bfa3dd98f49da725ae81c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:22:32 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08fa3beb7d57924b54e17ac3c72e5bc23bb316ccbf0bfa3dd98f49da725ae81c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:22:32 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08fa3beb7d57924b54e17ac3c72e5bc23bb316ccbf0bfa3dd98f49da725ae81c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:22:32 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08fa3beb7d57924b54e17ac3c72e5bc23bb316ccbf0bfa3dd98f49da725ae81c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:22:32 np0005558317 podman[141722]: 2025-12-13 07:22:32.216293315 +0000 UTC m=+0.107246109 container init 9ef07548b54809b544b4fe97ad0cc39724c849e2dd48efe485909f0a717a5ecc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:22:32 np0005558317 podman[141722]: 2025-12-13 07:22:32.221688585 +0000 UTC m=+0.112641359 container start 9ef07548b54809b544b4fe97ad0cc39724c849e2dd48efe485909f0a717a5ecc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_kirch, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:22:32 np0005558317 podman[141722]: 2025-12-13 07:22:32.223338153 +0000 UTC m=+0.114290937 container attach 9ef07548b54809b544b4fe97ad0cc39724c849e2dd48efe485909f0a717a5ecc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 02:22:32 np0005558317 podman[141722]: 2025-12-13 07:22:32.128798793 +0000 UTC m=+0.019751587 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:22:32 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v373: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:32 np0005558317 python3.9[141876]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:22:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:22:32 np0005558317 lvm[141988]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:22:32 np0005558317 lvm[141987]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:22:32 np0005558317 lvm[141988]: VG ceph_vg1 finished
Dec 13 02:22:32 np0005558317 lvm[141987]: VG ceph_vg0 finished
Dec 13 02:22:32 np0005558317 lvm[141991]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:22:32 np0005558317 lvm[141991]: VG ceph_vg2 finished
Dec 13 02:22:32 np0005558317 wonderful_kirch[141735]: {}
Dec 13 02:22:32 np0005558317 podman[141722]: 2025-12-13 07:22:32.889647757 +0000 UTC m=+0.780600531 container died 9ef07548b54809b544b4fe97ad0cc39724c849e2dd48efe485909f0a717a5ecc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 02:22:32 np0005558317 systemd[1]: libpod-9ef07548b54809b544b4fe97ad0cc39724c849e2dd48efe485909f0a717a5ecc.scope: Deactivated successfully.
Dec 13 02:22:32 np0005558317 systemd[1]: libpod-9ef07548b54809b544b4fe97ad0cc39724c849e2dd48efe485909f0a717a5ecc.scope: Consumed 1.017s CPU time.
Dec 13 02:22:32 np0005558317 systemd[1]: var-lib-containers-storage-overlay-08fa3beb7d57924b54e17ac3c72e5bc23bb316ccbf0bfa3dd98f49da725ae81c-merged.mount: Deactivated successfully.
Dec 13 02:22:32 np0005558317 podman[141722]: 2025-12-13 07:22:32.914530808 +0000 UTC m=+0.805483582 container remove 9ef07548b54809b544b4fe97ad0cc39724c849e2dd48efe485909f0a717a5ecc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:22:32 np0005558317 systemd[1]: libpod-conmon-9ef07548b54809b544b4fe97ad0cc39724c849e2dd48efe485909f0a717a5ecc.scope: Deactivated successfully.
Dec 13 02:22:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:22:32 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:22:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:22:32 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:22:33 np0005558317 python3.9[142132]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:22:33 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:22:33 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:22:33 np0005558317 python3.9[142255]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610552.7842076-468-161078544965777/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:22:34 np0005558317 python3.9[142407]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:22:34 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v374: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:34 np0005558317 python3.9[142559]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:22:35 np0005558317 python3.9[142682]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610554.4789786-493-272967139376869/.source.json _original_basename=.lvsilzj8 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:22:35 np0005558317 python3.9[142834]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:22:36 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v375: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:37 np0005558317 python3.9[143261]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec 13 02:22:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:22:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:22:38
Dec 13 02:22:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:22:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 02:22:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', '.rgw.root', 'images', 'vms']
Dec 13 02:22:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:22:38 np0005558317 python3.9[143413]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 13 02:22:38 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v376: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:38 np0005558317 python3.9[143565]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 13 02:22:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:22:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:22:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:22:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:22:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:22:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:22:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:22:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:22:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:22:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:22:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:22:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:22:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:22:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:22:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:22:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:22:40 np0005558317 python3[143737]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 13 02:22:40 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v377: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:42 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v378: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:22:44 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v379: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:45 np0005558317 podman[143748]: 2025-12-13 07:22:45.586237452 +0000 UTC m=+5.361082652 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de
Dec 13 02:22:45 np0005558317 podman[143847]: 2025-12-13 07:22:45.701485826 +0000 UTC m=+0.033874444 container create d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:22:45 np0005558317 podman[143847]: 2025-12-13 07:22:45.687286355 +0000 UTC m=+0.019674993 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de
Dec 13 02:22:45 np0005558317 python3[143737]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de
Dec 13 02:22:46 np0005558317 python3.9[144026]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:22:46 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v380: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:46 np0005558317 python3.9[144180]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:22:47 np0005558317 python3.9[144256]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:22:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:22:47 np0005558317 python3.9[144407]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765610567.3360138-581-20068041669165/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:22:48 np0005558317 python3.9[144483]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 02:22:48 np0005558317 systemd[1]: Reloading.
Dec 13 02:22:48 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v381: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:48 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:22:48 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:22:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:22:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:22:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:22:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:22:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:22:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:22:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:22:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:22:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:22:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:22:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:22:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:22:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 02:22:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:22:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:22:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:22:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:22:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:22:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:22:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:22:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:22:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:22:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:22:49 np0005558317 python3.9[144594]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:22:49 np0005558317 systemd[1]: Reloading.
Dec 13 02:22:49 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:22:49 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:22:49 np0005558317 systemd[1]: Starting ovn_controller container...
Dec 13 02:22:49 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:22:49 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97e60b40e88c9d4273841e05a580f706923cb5a4635c1fb0bec6354585657969/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 13 02:22:49 np0005558317 systemd[1]: Started /usr/bin/podman healthcheck run d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed.
Dec 13 02:22:49 np0005558317 podman[144635]: 2025-12-13 07:22:49.420152805 +0000 UTC m=+0.087136491 container init d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: + sudo -E kolla_set_configs
Dec 13 02:22:49 np0005558317 podman[144635]: 2025-12-13 07:22:49.442638715 +0000 UTC m=+0.109622381 container start d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 02:22:49 np0005558317 edpm-start-podman-container[144635]: ovn_controller
Dec 13 02:22:49 np0005558317 systemd[1]: Created slice User Slice of UID 0.
Dec 13 02:22:49 np0005558317 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 13 02:22:49 np0005558317 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 13 02:22:49 np0005558317 systemd[1]: Starting User Manager for UID 0...
Dec 13 02:22:49 np0005558317 edpm-start-podman-container[144634]: Creating additional drop-in dependency for "ovn_controller" (d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed)
Dec 13 02:22:49 np0005558317 podman[144654]: 2025-12-13 07:22:49.523217503 +0000 UTC m=+0.072472828 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 02:22:49 np0005558317 systemd[1]: d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed-304b4c4e52c8cee3.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:22:49 np0005558317 systemd[1]: d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed-304b4c4e52c8cee3.service: Failed with result 'exit-code'.
Dec 13 02:22:49 np0005558317 systemd[1]: Reloading.
Dec 13 02:22:49 np0005558317 systemd[144676]: Queued start job for default target Main User Target.
Dec 13 02:22:49 np0005558317 systemd[144676]: Created slice User Application Slice.
Dec 13 02:22:49 np0005558317 systemd[144676]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 13 02:22:49 np0005558317 systemd[144676]: Started Daily Cleanup of User's Temporary Directories.
Dec 13 02:22:49 np0005558317 systemd[144676]: Reached target Paths.
Dec 13 02:22:49 np0005558317 systemd[144676]: Reached target Timers.
Dec 13 02:22:49 np0005558317 systemd[144676]: Starting D-Bus User Message Bus Socket...
Dec 13 02:22:49 np0005558317 systemd[144676]: Starting Create User's Volatile Files and Directories...
Dec 13 02:22:49 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:22:49 np0005558317 systemd[144676]: Listening on D-Bus User Message Bus Socket.
Dec 13 02:22:49 np0005558317 systemd[144676]: Reached target Sockets.
Dec 13 02:22:49 np0005558317 systemd[144676]: Finished Create User's Volatile Files and Directories.
Dec 13 02:22:49 np0005558317 systemd[144676]: Reached target Basic System.
Dec 13 02:22:49 np0005558317 systemd[144676]: Reached target Main User Target.
Dec 13 02:22:49 np0005558317 systemd[144676]: Startup finished in 108ms.
Dec 13 02:22:49 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:22:49 np0005558317 systemd[1]: Started User Manager for UID 0.
Dec 13 02:22:49 np0005558317 systemd[1]: Started ovn_controller container.
Dec 13 02:22:49 np0005558317 systemd[1]: Started Session c1 of User root.
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: INFO:__main__:Validating config file
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: INFO:__main__:Writing out command to execute
Dec 13 02:22:49 np0005558317 systemd[1]: session-c1.scope: Deactivated successfully.
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: ++ cat /run_command
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: + ARGS=
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: + sudo kolla_copy_cacerts
Dec 13 02:22:49 np0005558317 systemd[1]: Started Session c2 of User root.
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: + [[ ! -n '' ]]
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: + . kolla_extend_start
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: + umask 0022
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Dec 13 02:22:49 np0005558317 systemd[1]: session-c2.scope: Deactivated successfully.
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec 13 02:22:49 np0005558317 NetworkManager[48896]: <info>  [1765610569.8944] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Dec 13 02:22:49 np0005558317 NetworkManager[48896]: <info>  [1765610569.8950] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 02:22:49 np0005558317 NetworkManager[48896]: <warn>  [1765610569.8951] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 13 02:22:49 np0005558317 NetworkManager[48896]: <info>  [1765610569.8959] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec 13 02:22:49 np0005558317 NetworkManager[48896]: <info>  [1765610569.8964] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Dec 13 02:22:49 np0005558317 NetworkManager[48896]: <info>  [1765610569.8967] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 13 02:22:49 np0005558317 kernel: br-int: entered promiscuous mode
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00014|main|INFO|OVS feature set changed, force recompute.
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00019|main|INFO|OVS feature set changed, force recompute.
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00022|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 13 02:22:49 np0005558317 ovn_controller[144647]: 2025-12-13T07:22:49Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 13 02:22:49 np0005558317 NetworkManager[48896]: <info>  [1765610569.9149] manager: (ovn-d8e85b-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec 13 02:22:49 np0005558317 kernel: genev_sys_6081: entered promiscuous mode
Dec 13 02:22:49 np0005558317 NetworkManager[48896]: <info>  [1765610569.9297] device (genev_sys_6081): carrier: link connected
Dec 13 02:22:49 np0005558317 NetworkManager[48896]: <info>  [1765610569.9299] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Dec 13 02:22:49 np0005558317 systemd-udevd[144798]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:22:49 np0005558317 systemd-udevd[144799]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:22:50 np0005558317 python3.9[144908]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:22:50 np0005558317 ovs-vsctl[144909]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec 13 02:22:50 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v382: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:50 np0005558317 python3.9[145061]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:22:50 np0005558317 ovs-vsctl[145063]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec 13 02:22:51 np0005558317 python3.9[145216]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:22:51 np0005558317 ovs-vsctl[145217]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec 13 02:22:51 np0005558317 systemd[1]: session-48.scope: Deactivated successfully.
Dec 13 02:22:51 np0005558317 systemd[1]: session-48.scope: Consumed 43.474s CPU time.
Dec 13 02:22:51 np0005558317 systemd-logind[745]: Session 48 logged out. Waiting for processes to exit.
Dec 13 02:22:51 np0005558317 systemd-logind[745]: Removed session 48.
Dec 13 02:22:52 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v383: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:22:54 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v384: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:56 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v385: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:57 np0005558317 systemd-logind[745]: New session 50 of user zuul.
Dec 13 02:22:57 np0005558317 systemd[1]: Started Session 50 of User zuul.
Dec 13 02:22:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:22:58 np0005558317 python3.9[145395]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:22:58 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v386: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:22:59 np0005558317 python3.9[145551]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:22:59 np0005558317 python3.9[145703]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:22:59 np0005558317 systemd[1]: Stopping User Manager for UID 0...
Dec 13 02:22:59 np0005558317 systemd[144676]: Activating special unit Exit the Session...
Dec 13 02:22:59 np0005558317 systemd[144676]: Stopped target Main User Target.
Dec 13 02:22:59 np0005558317 systemd[144676]: Stopped target Basic System.
Dec 13 02:22:59 np0005558317 systemd[144676]: Stopped target Paths.
Dec 13 02:22:59 np0005558317 systemd[144676]: Stopped target Sockets.
Dec 13 02:22:59 np0005558317 systemd[144676]: Stopped target Timers.
Dec 13 02:22:59 np0005558317 systemd[144676]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 13 02:22:59 np0005558317 systemd[144676]: Closed D-Bus User Message Bus Socket.
Dec 13 02:22:59 np0005558317 systemd[144676]: Stopped Create User's Volatile Files and Directories.
Dec 13 02:22:59 np0005558317 systemd[144676]: Removed slice User Application Slice.
Dec 13 02:22:59 np0005558317 systemd[144676]: Reached target Shutdown.
Dec 13 02:22:59 np0005558317 systemd[144676]: Finished Exit the Session.
Dec 13 02:22:59 np0005558317 systemd[144676]: Reached target Exit the Session.
Dec 13 02:22:59 np0005558317 systemd[1]: user@0.service: Deactivated successfully.
Dec 13 02:22:59 np0005558317 systemd[1]: Stopped User Manager for UID 0.
Dec 13 02:22:59 np0005558317 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 13 02:22:59 np0005558317 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 13 02:22:59 np0005558317 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 13 02:22:59 np0005558317 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 13 02:22:59 np0005558317 systemd[1]: Removed slice User Slice of UID 0.
Dec 13 02:22:59 np0005558317 python3.9[145855]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:23:00 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v387: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:00 np0005558317 python3.9[146008]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:23:01 np0005558317 python3.9[146160]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:23:01 np0005558317 python3.9[146310]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:23:02 np0005558317 python3.9[146462]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 13 02:23:02 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v388: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:23:03 np0005558317 python3.9[146612]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:23:03 np0005558317 python3.9[146733]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610582.9269547-86-256388414577141/.source follow=False _original_basename=haproxy.j2 checksum=d225e0e1c34f765c55f17e757e326dba55238d01 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:23:04 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v389: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:04 np0005558317 python3.9[146883]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:23:04 np0005558317 python3.9[147004]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610584.0556464-101-203307664463491/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:23:05 np0005558317 python3.9[147156]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:23:06 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v390: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:06 np0005558317 python3.9[147243]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:23:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:23:08 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v391: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:08 np0005558317 python3.9[147397]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 02:23:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:23:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:23:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:23:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:23:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:23:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:23:09 np0005558317 python3.9[147550]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:23:09 np0005558317 python3.9[147671]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610588.7667036-138-68132395111239/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:23:09 np0005558317 python3.9[147821]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:23:10 np0005558317 python3.9[147942]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610589.6388168-138-114925049156814/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:23:10 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v392: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:11 np0005558317 python3.9[148092]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:23:11 np0005558317 python3.9[148213]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610590.8967204-182-17815837219079/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:23:12 np0005558317 python3.9[148363]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:23:12 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v393: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:12 np0005558317 python3.9[148484]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610591.6803172-182-37697643639377/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:23:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:23:12 np0005558317 python3.9[148634]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:23:13 np0005558317 python3.9[148788]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:23:13 np0005558317 python3.9[148940]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:23:14 np0005558317 python3.9[149018]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:23:14 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v394: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:14 np0005558317 python3.9[149170]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:23:14 np0005558317 python3.9[149248]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:23:15 np0005558317 python3.9[149400]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:23:15 np0005558317 python3.9[149552]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:23:16 np0005558317 python3.9[149630]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:23:16 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v395: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:16 np0005558317 python3.9[149782]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:23:17 np0005558317 python3.9[149860]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:23:17 np0005558317 python3.9[150012]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:23:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:23:17 np0005558317 systemd[1]: Reloading.
Dec 13 02:23:17 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:23:17 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:23:18 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v396: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:18 np0005558317 python3.9[150201]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:23:18 np0005558317 python3.9[150279]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:23:19 np0005558317 python3.9[150431]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:23:19 np0005558317 ovn_controller[144647]: 2025-12-13T07:23:19Z|00025|memory|INFO|17280 kB peak resident set size after 29.7 seconds
Dec 13 02:23:19 np0005558317 ovn_controller[144647]: 2025-12-13T07:23:19Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Dec 13 02:23:19 np0005558317 podman[150509]: 2025-12-13 07:23:19.625499109 +0000 UTC m=+0.066293443 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:23:19 np0005558317 python3.9[150510]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:23:20 np0005558317 python3.9[150684]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:23:20 np0005558317 systemd[1]: Reloading.
Dec 13 02:23:20 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v397: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:20 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:23:20 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:23:20 np0005558317 systemd[1]: Starting Create netns directory...
Dec 13 02:23:20 np0005558317 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 13 02:23:20 np0005558317 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 13 02:23:20 np0005558317 systemd[1]: Finished Create netns directory.
Dec 13 02:23:21 np0005558317 python3.9[150877]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:23:21 np0005558317 python3.9[151029]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:23:21 np0005558317 python3.9[151152]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610601.249325-333-50749771526750/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:23:22 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v398: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:22 np0005558317 python3.9[151304]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:23:22 np0005558317 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:23:22 np0005558317 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2058 writes, 9119 keys, 2058 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2058 writes, 2058 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2058 writes, 9119 keys, 2058 commit groups, 1.0 writes per commit group, ingest: 12.31 MB, 0.02 MB/s#012Interval WAL: 2058 writes, 2058 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    401.7      0.02              0.02         3    0.007       0      0       0.0       0.0#012  L6      1/0    6.57 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    492.9    428.9      0.03              0.03         2    0.016    7166    731       0.0       0.0#012 Sum      1/0    6.57 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    296.5    418.1      0.05              0.04         5    0.011    7166    731       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    302.5    425.7      0.05              0.04         4    0.013    7166    731       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    492.9    428.9      0.03              0.03         2    0.016    7166    731       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    420.5      0.02              0.02         2    0.010       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     42.7      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.008, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.1 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5642ba289a30#2 capacity: 308.00 MB usage: 690.88 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 2.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(37,603.69 KB,0.191409%) FilterBlock(6,28.30 KB,0.00897197%) IndexBlock(6,58.89 KB,0.0186722%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec 13 02:23:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:23:23 np0005558317 python3.9[151456]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:23:23 np0005558317 python3.9[151579]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610602.7405534-358-280845959330594/.source.json _original_basename=.ufoz9njg follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:23:23 np0005558317 python3.9[151731]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:23:24 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v399: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:25 np0005558317 python3.9[152158]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec 13 02:23:26 np0005558317 python3.9[152310]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 13 02:23:26 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v400: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:26 np0005558317 python3.9[152462]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 13 02:23:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:23:27 np0005558317 python3[152635]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 13 02:23:28 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v401: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:30 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v402: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:32 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v403: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:23:34 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v404: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:36 np0005558317 podman[152646]: 2025-12-13 07:23:36.043046088 +0000 UTC m=+8.114068519 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 02:23:36 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:23:36 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:23:36 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:23:36 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:23:36 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:23:36 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:23:36 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:23:36 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:23:36 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:23:36 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:23:36 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:23:36 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:23:36 np0005558317 podman[152823]: 2025-12-13 07:23:36.174246493 +0000 UTC m=+0.035269186 container create 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:23:36 np0005558317 podman[152823]: 2025-12-13 07:23:36.158613302 +0000 UTC m=+0.019636014 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 02:23:36 np0005558317 python3[152635]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} 
--log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 02:23:36 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v405: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:36 np0005558317 podman[152957]: 2025-12-13 07:23:36.451618472 +0000 UTC m=+0.031459755 container create 6ca649ac93e7c0bdc1ca24e09d251b45cdf080432105b49de02cc6c09a2638cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 02:23:36 np0005558317 systemd[1]: Started libpod-conmon-6ca649ac93e7c0bdc1ca24e09d251b45cdf080432105b49de02cc6c09a2638cf.scope.
Dec 13 02:23:36 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:23:36 np0005558317 podman[152957]: 2025-12-13 07:23:36.505320704 +0000 UTC m=+0.085161997 container init 6ca649ac93e7c0bdc1ca24e09d251b45cdf080432105b49de02cc6c09a2638cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mirzakhani, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:23:36 np0005558317 podman[152957]: 2025-12-13 07:23:36.510592394 +0000 UTC m=+0.090433677 container start 6ca649ac93e7c0bdc1ca24e09d251b45cdf080432105b49de02cc6c09a2638cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mirzakhani, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 02:23:36 np0005558317 podman[152957]: 2025-12-13 07:23:36.513017179 +0000 UTC m=+0.092858482 container attach 6ca649ac93e7c0bdc1ca24e09d251b45cdf080432105b49de02cc6c09a2638cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mirzakhani, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 02:23:36 np0005558317 great_mirzakhani[152999]: 167 167
Dec 13 02:23:36 np0005558317 systemd[1]: libpod-6ca649ac93e7c0bdc1ca24e09d251b45cdf080432105b49de02cc6c09a2638cf.scope: Deactivated successfully.
Dec 13 02:23:36 np0005558317 podman[152957]: 2025-12-13 07:23:36.519185259 +0000 UTC m=+0.099026542 container died 6ca649ac93e7c0bdc1ca24e09d251b45cdf080432105b49de02cc6c09a2638cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mirzakhani, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 02:23:36 np0005558317 systemd[1]: var-lib-containers-storage-overlay-a407f90e8c3fd63a8477ee2e8aeba81a8835e01306fae3d93e9b6bcc1f283ed1-merged.mount: Deactivated successfully.
Dec 13 02:23:36 np0005558317 podman[152957]: 2025-12-13 07:23:36.439369854 +0000 UTC m=+0.019211157 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:23:36 np0005558317 podman[152957]: 2025-12-13 07:23:36.543308262 +0000 UTC m=+0.123149545 container remove 6ca649ac93e7c0bdc1ca24e09d251b45cdf080432105b49de02cc6c09a2638cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 02:23:36 np0005558317 systemd[1]: libpod-conmon-6ca649ac93e7c0bdc1ca24e09d251b45cdf080432105b49de02cc6c09a2638cf.scope: Deactivated successfully.
Dec 13 02:23:36 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:23:36 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:23:36 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:23:36 np0005558317 podman[153097]: 2025-12-13 07:23:36.67125046 +0000 UTC m=+0.032010588 container create 5ead1753c8904d8eae493e74b82303095cffcb2dc96c4a0630c360ee11be0f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hopper, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:23:36 np0005558317 systemd[1]: Started libpod-conmon-5ead1753c8904d8eae493e74b82303095cffcb2dc96c4a0630c360ee11be0f57.scope.
Dec 13 02:23:36 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:23:36 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc99e508cec3f779af9e1b192067cc645d0ee5b4457f0b03714153a5de02a4a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:23:36 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc99e508cec3f779af9e1b192067cc645d0ee5b4457f0b03714153a5de02a4a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:23:36 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc99e508cec3f779af9e1b192067cc645d0ee5b4457f0b03714153a5de02a4a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:23:36 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc99e508cec3f779af9e1b192067cc645d0ee5b4457f0b03714153a5de02a4a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:23:36 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc99e508cec3f779af9e1b192067cc645d0ee5b4457f0b03714153a5de02a4a6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:23:36 np0005558317 podman[153097]: 2025-12-13 07:23:36.729216353 +0000 UTC m=+0.089976511 container init 5ead1753c8904d8eae493e74b82303095cffcb2dc96c4a0630c360ee11be0f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 02:23:36 np0005558317 podman[153097]: 2025-12-13 07:23:36.734914272 +0000 UTC m=+0.095674410 container start 5ead1753c8904d8eae493e74b82303095cffcb2dc96c4a0630c360ee11be0f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 02:23:36 np0005558317 podman[153097]: 2025-12-13 07:23:36.737925206 +0000 UTC m=+0.098685343 container attach 5ead1753c8904d8eae493e74b82303095cffcb2dc96c4a0630c360ee11be0f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 02:23:36 np0005558317 podman[153097]: 2025-12-13 07:23:36.658107898 +0000 UTC m=+0.018868035 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:23:36 np0005558317 python3.9[153096]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:23:37 np0005558317 agitated_hopper[153110]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:23:37 np0005558317 agitated_hopper[153110]: --> All data devices are unavailable
Dec 13 02:23:37 np0005558317 systemd[1]: libpod-5ead1753c8904d8eae493e74b82303095cffcb2dc96c4a0630c360ee11be0f57.scope: Deactivated successfully.
Dec 13 02:23:37 np0005558317 podman[153097]: 2025-12-13 07:23:37.146865089 +0000 UTC m=+0.507625236 container died 5ead1753c8904d8eae493e74b82303095cffcb2dc96c4a0630c360ee11be0f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hopper, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 02:23:37 np0005558317 systemd[1]: var-lib-containers-storage-overlay-fc99e508cec3f779af9e1b192067cc645d0ee5b4457f0b03714153a5de02a4a6-merged.mount: Deactivated successfully.
Dec 13 02:23:37 np0005558317 podman[153097]: 2025-12-13 07:23:37.170239278 +0000 UTC m=+0.530999414 container remove 5ead1753c8904d8eae493e74b82303095cffcb2dc96c4a0630c360ee11be0f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hopper, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:23:37 np0005558317 systemd[1]: libpod-conmon-5ead1753c8904d8eae493e74b82303095cffcb2dc96c4a0630c360ee11be0f57.scope: Deactivated successfully.
Dec 13 02:23:37 np0005558317 python3.9[153292]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:23:37 np0005558317 podman[153404]: 2025-12-13 07:23:37.527689916 +0000 UTC m=+0.029872760 container create 3ce948461d85e3c39cf801f8605e850fab1f8915813748ec659bd05b873a2428 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rubin, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:23:37 np0005558317 systemd[1]: Started libpod-conmon-3ce948461d85e3c39cf801f8605e850fab1f8915813748ec659bd05b873a2428.scope.
Dec 13 02:23:37 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:23:37 np0005558317 podman[153404]: 2025-12-13 07:23:37.580328734 +0000 UTC m=+0.082511609 container init 3ce948461d85e3c39cf801f8605e850fab1f8915813748ec659bd05b873a2428 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:23:37 np0005558317 podman[153404]: 2025-12-13 07:23:37.585491941 +0000 UTC m=+0.087674796 container start 3ce948461d85e3c39cf801f8605e850fab1f8915813748ec659bd05b873a2428 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rubin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 02:23:37 np0005558317 podman[153404]: 2025-12-13 07:23:37.58732142 +0000 UTC m=+0.089504275 container attach 3ce948461d85e3c39cf801f8605e850fab1f8915813748ec659bd05b873a2428 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rubin, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 02:23:37 np0005558317 zen_rubin[153442]: 167 167
Dec 13 02:23:37 np0005558317 systemd[1]: libpod-3ce948461d85e3c39cf801f8605e850fab1f8915813748ec659bd05b873a2428.scope: Deactivated successfully.
Dec 13 02:23:37 np0005558317 podman[153404]: 2025-12-13 07:23:37.589854457 +0000 UTC m=+0.092037313 container died 3ce948461d85e3c39cf801f8605e850fab1f8915813748ec659bd05b873a2428 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:23:37 np0005558317 systemd[1]: var-lib-containers-storage-overlay-8d2d8ece24fee494c43349d6d723082d6de89c4084429be2a4948aa94fb6b22a-merged.mount: Deactivated successfully.
Dec 13 02:23:37 np0005558317 podman[153404]: 2025-12-13 07:23:37.515232657 +0000 UTC m=+0.017415532 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:23:37 np0005558317 podman[153404]: 2025-12-13 07:23:37.613922287 +0000 UTC m=+0.116105142 container remove 3ce948461d85e3c39cf801f8605e850fab1f8915813748ec659bd05b873a2428 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rubin, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:23:37 np0005558317 systemd[1]: libpod-conmon-3ce948461d85e3c39cf801f8605e850fab1f8915813748ec659bd05b873a2428.scope: Deactivated successfully.
Dec 13 02:23:37 np0005558317 python3.9[153439]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:23:37 np0005558317 podman[153464]: 2025-12-13 07:23:37.743493127 +0000 UTC m=+0.032767055 container create bcce2f4996edd48540019d060d5ec3afc9db84005d1a4dcf587db1ef5423baa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_williams, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:23:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:23:37 np0005558317 systemd[1]: Started libpod-conmon-bcce2f4996edd48540019d060d5ec3afc9db84005d1a4dcf587db1ef5423baa2.scope.
Dec 13 02:23:37 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:23:37 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f78ba06436d8488e9b400dcf0c5d6ad976f2a88294b7ea1aa42acd73c015653/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:23:37 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f78ba06436d8488e9b400dcf0c5d6ad976f2a88294b7ea1aa42acd73c015653/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:23:37 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f78ba06436d8488e9b400dcf0c5d6ad976f2a88294b7ea1aa42acd73c015653/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:23:37 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f78ba06436d8488e9b400dcf0c5d6ad976f2a88294b7ea1aa42acd73c015653/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:23:37 np0005558317 podman[153464]: 2025-12-13 07:23:37.799085812 +0000 UTC m=+0.088359761 container init bcce2f4996edd48540019d060d5ec3afc9db84005d1a4dcf587db1ef5423baa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_williams, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 02:23:37 np0005558317 podman[153464]: 2025-12-13 07:23:37.806245853 +0000 UTC m=+0.095519781 container start bcce2f4996edd48540019d060d5ec3afc9db84005d1a4dcf587db1ef5423baa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_williams, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 02:23:37 np0005558317 podman[153464]: 2025-12-13 07:23:37.808734908 +0000 UTC m=+0.098008856 container attach bcce2f4996edd48540019d060d5ec3afc9db84005d1a4dcf587db1ef5423baa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:23:37 np0005558317 podman[153464]: 2025-12-13 07:23:37.729805042 +0000 UTC m=+0.019078990 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:23:38 np0005558317 awesome_williams[153500]: {
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:    "0": [
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:        {
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "devices": [
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "/dev/loop3"
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            ],
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "lv_name": "ceph_lv0",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "lv_size": "21470642176",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "name": "ceph_lv0",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "tags": {
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.cluster_name": "ceph",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.crush_device_class": "",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.encrypted": "0",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.objectstore": "bluestore",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.osd_id": "0",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.type": "block",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.vdo": "0",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.with_tpm": "0"
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            },
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "type": "block",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "vg_name": "ceph_vg0"
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:        }
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:    ],
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:    "1": [
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:        {
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "devices": [
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "/dev/loop4"
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            ],
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "lv_name": "ceph_lv1",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "lv_size": "21470642176",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "name": "ceph_lv1",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "tags": {
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.cluster_name": "ceph",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.crush_device_class": "",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.encrypted": "0",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.objectstore": "bluestore",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.osd_id": "1",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.type": "block",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.vdo": "0",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.with_tpm": "0"
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            },
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "type": "block",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "vg_name": "ceph_vg1"
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:        }
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:    ],
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:    "2": [
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:        {
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "devices": [
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "/dev/loop5"
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            ],
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "lv_name": "ceph_lv2",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "lv_size": "21470642176",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "name": "ceph_lv2",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "tags": {
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.cluster_name": "ceph",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.crush_device_class": "",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.encrypted": "0",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.objectstore": "bluestore",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.osd_id": "2",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.type": "block",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.vdo": "0",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:                "ceph.with_tpm": "0"
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            },
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "type": "block",
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:            "vg_name": "ceph_vg2"
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:        }
Dec 13 02:23:38 np0005558317 awesome_williams[153500]:    ]
Dec 13 02:23:38 np0005558317 awesome_williams[153500]: }
Dec 13 02:23:38 np0005558317 systemd[1]: libpod-bcce2f4996edd48540019d060d5ec3afc9db84005d1a4dcf587db1ef5423baa2.scope: Deactivated successfully.
Dec 13 02:23:38 np0005558317 podman[153464]: 2025-12-13 07:23:38.070784308 +0000 UTC m=+0.360058246 container died bcce2f4996edd48540019d060d5ec3afc9db84005d1a4dcf587db1ef5423baa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_williams, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:23:38 np0005558317 systemd[1]: var-lib-containers-storage-overlay-9f78ba06436d8488e9b400dcf0c5d6ad976f2a88294b7ea1aa42acd73c015653-merged.mount: Deactivated successfully.
Dec 13 02:23:38 np0005558317 podman[153464]: 2025-12-13 07:23:38.094159559 +0000 UTC m=+0.383433488 container remove bcce2f4996edd48540019d060d5ec3afc9db84005d1a4dcf587db1ef5423baa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_williams, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:23:38 np0005558317 systemd[1]: libpod-conmon-bcce2f4996edd48540019d060d5ec3afc9db84005d1a4dcf587db1ef5423baa2.scope: Deactivated successfully.
Dec 13 02:23:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:23:38
Dec 13 02:23:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:23:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 02:23:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] pools ['volumes', 'images', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'vms', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'default.rgw.control']
Dec 13 02:23:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:23:38 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v406: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:38 np0005558317 python3.9[153697]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765610617.773155-446-281226908936130/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:23:38 np0005558317 podman[153708]: 2025-12-13 07:23:38.439539833 +0000 UTC m=+0.030321932 container create 9279eaea325b9c0f6d230166641b85bbb16210ec883f5052e9737af03579af69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 02:23:38 np0005558317 systemd[1]: Started libpod-conmon-9279eaea325b9c0f6d230166641b85bbb16210ec883f5052e9737af03579af69.scope.
Dec 13 02:23:38 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:23:38 np0005558317 podman[153708]: 2025-12-13 07:23:38.488074972 +0000 UTC m=+0.078857081 container init 9279eaea325b9c0f6d230166641b85bbb16210ec883f5052e9737af03579af69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 02:23:38 np0005558317 podman[153708]: 2025-12-13 07:23:38.493317727 +0000 UTC m=+0.084099817 container start 9279eaea325b9c0f6d230166641b85bbb16210ec883f5052e9737af03579af69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_gauss, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 02:23:38 np0005558317 podman[153708]: 2025-12-13 07:23:38.494513389 +0000 UTC m=+0.085295478 container attach 9279eaea325b9c0f6d230166641b85bbb16210ec883f5052e9737af03579af69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 02:23:38 np0005558317 elegant_gauss[153741]: 167 167
Dec 13 02:23:38 np0005558317 systemd[1]: libpod-9279eaea325b9c0f6d230166641b85bbb16210ec883f5052e9737af03579af69.scope: Deactivated successfully.
Dec 13 02:23:38 np0005558317 podman[153708]: 2025-12-13 07:23:38.497264805 +0000 UTC m=+0.088046895 container died 9279eaea325b9c0f6d230166641b85bbb16210ec883f5052e9737af03579af69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_gauss, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 02:23:38 np0005558317 podman[153708]: 2025-12-13 07:23:38.514907723 +0000 UTC m=+0.105689812 container remove 9279eaea325b9c0f6d230166641b85bbb16210ec883f5052e9737af03579af69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_gauss, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 02:23:38 np0005558317 podman[153708]: 2025-12-13 07:23:38.428360069 +0000 UTC m=+0.019142178 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:23:38 np0005558317 systemd[1]: var-lib-containers-storage-overlay-9089c5be8189707c49577812b72d6f731277245db29a8db619f64417fd0336c8-merged.mount: Deactivated successfully.
Dec 13 02:23:38 np0005558317 systemd[1]: libpod-conmon-9279eaea325b9c0f6d230166641b85bbb16210ec883f5052e9737af03579af69.scope: Deactivated successfully.
Dec 13 02:23:38 np0005558317 podman[153820]: 2025-12-13 07:23:38.644281384 +0000 UTC m=+0.028695773 container create 664f92f46d50855f365cf5239ac99d63653c340284dbd79bf6456771b39a3651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatelet, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:23:38 np0005558317 systemd[1]: Started libpod-conmon-664f92f46d50855f365cf5239ac99d63653c340284dbd79bf6456771b39a3651.scope.
Dec 13 02:23:38 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:23:38 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b647745b8acc763a82680e4b4bd27ab0f1fa67855cefd712ac5adee41617ce2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:23:38 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b647745b8acc763a82680e4b4bd27ab0f1fa67855cefd712ac5adee41617ce2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:23:38 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b647745b8acc763a82680e4b4bd27ab0f1fa67855cefd712ac5adee41617ce2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:23:38 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b647745b8acc763a82680e4b4bd27ab0f1fa67855cefd712ac5adee41617ce2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:23:38 np0005558317 podman[153820]: 2025-12-13 07:23:38.713262833 +0000 UTC m=+0.097677234 container init 664f92f46d50855f365cf5239ac99d63653c340284dbd79bf6456771b39a3651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Dec 13 02:23:38 np0005558317 podman[153820]: 2025-12-13 07:23:38.717756215 +0000 UTC m=+0.102170604 container start 664f92f46d50855f365cf5239ac99d63653c340284dbd79bf6456771b39a3651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatelet, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:23:38 np0005558317 podman[153820]: 2025-12-13 07:23:38.719474264 +0000 UTC m=+0.103888655 container attach 664f92f46d50855f365cf5239ac99d63653c340284dbd79bf6456771b39a3651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 02:23:38 np0005558317 podman[153820]: 2025-12-13 07:23:38.633070732 +0000 UTC m=+0.017485142 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:23:38 np0005558317 python3.9[153822]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 02:23:38 np0005558317 systemd[1]: Reloading.
Dec 13 02:23:38 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:23:38 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:23:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:23:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:23:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:23:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:23:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:23:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:23:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:23:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:23:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:23:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:23:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:23:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:23:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:23:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:23:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:23:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:23:39 np0005558317 lvm[153995]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:23:39 np0005558317 lvm[153996]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:23:39 np0005558317 lvm[153993]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:23:39 np0005558317 lvm[153996]: VG ceph_vg2 finished
Dec 13 02:23:39 np0005558317 lvm[153995]: VG ceph_vg1 finished
Dec 13 02:23:39 np0005558317 lvm[153993]: VG ceph_vg0 finished
Dec 13 02:23:39 np0005558317 nice_chatelet[153834]: {}
Dec 13 02:23:39 np0005558317 systemd[1]: libpod-664f92f46d50855f365cf5239ac99d63653c340284dbd79bf6456771b39a3651.scope: Deactivated successfully.
Dec 13 02:23:39 np0005558317 podman[153820]: 2025-12-13 07:23:39.308877162 +0000 UTC m=+0.693291562 container died 664f92f46d50855f365cf5239ac99d63653c340284dbd79bf6456771b39a3651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatelet, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 02:23:39 np0005558317 systemd[1]: var-lib-containers-storage-overlay-9b647745b8acc763a82680e4b4bd27ab0f1fa67855cefd712ac5adee41617ce2-merged.mount: Deactivated successfully.
Dec 13 02:23:39 np0005558317 podman[153820]: 2025-12-13 07:23:39.334346348 +0000 UTC m=+0.718760738 container remove 664f92f46d50855f365cf5239ac99d63653c340284dbd79bf6456771b39a3651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 02:23:39 np0005558317 systemd[1]: libpod-conmon-664f92f46d50855f365cf5239ac99d63653c340284dbd79bf6456771b39a3651.scope: Deactivated successfully.
Dec 13 02:23:39 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:23:39 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:23:39 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:23:39 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:23:39 np0005558317 python3.9[154036]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:23:39 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:23:39 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:23:39 np0005558317 systemd[1]: Reloading.
Dec 13 02:23:39 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:23:39 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:23:39 np0005558317 systemd[1]: Starting ovn_metadata_agent container...
Dec 13 02:23:39 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:23:39 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5ac2e3cc0f49fbd08a64ac89f3699fdf738171896df38043320a4a42d495566/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 13 02:23:39 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5ac2e3cc0f49fbd08a64ac89f3699fdf738171896df38043320a4a42d495566/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 02:23:39 np0005558317 systemd[1]: Started /usr/bin/podman healthcheck run 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07.
Dec 13 02:23:39 np0005558317 podman[154103]: 2025-12-13 07:23:39.966152081 +0000 UTC m=+0.082438884 container init 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 13 02:23:39 np0005558317 ovn_metadata_agent[154116]: + sudo -E kolla_set_configs
Dec 13 02:23:39 np0005558317 podman[154103]: 2025-12-13 07:23:39.993206648 +0000 UTC m=+0.109493452 container start 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 02:23:39 np0005558317 edpm-start-podman-container[154103]: ovn_metadata_agent
Dec 13 02:23:40 np0005558317 ovn_metadata_agent[154116]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 13 02:23:40 np0005558317 ovn_metadata_agent[154116]: INFO:__main__:Validating config file
Dec 13 02:23:40 np0005558317 ovn_metadata_agent[154116]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 13 02:23:40 np0005558317 ovn_metadata_agent[154116]: INFO:__main__:Copying service configuration files
Dec 13 02:23:40 np0005558317 ovn_metadata_agent[154116]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 13 02:23:40 np0005558317 ovn_metadata_agent[154116]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 13 02:23:40 np0005558317 ovn_metadata_agent[154116]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 13 02:23:40 np0005558317 ovn_metadata_agent[154116]: INFO:__main__:Writing out command to execute
Dec 13 02:23:40 np0005558317 ovn_metadata_agent[154116]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 13 02:23:40 np0005558317 ovn_metadata_agent[154116]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 13 02:23:40 np0005558317 ovn_metadata_agent[154116]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 13 02:23:40 np0005558317 ovn_metadata_agent[154116]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 13 02:23:40 np0005558317 ovn_metadata_agent[154116]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 13 02:23:40 np0005558317 ovn_metadata_agent[154116]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 13 02:23:40 np0005558317 ovn_metadata_agent[154116]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 13 02:23:40 np0005558317 podman[154122]: 2025-12-13 07:23:40.046089895 +0000 UTC m=+0.045306809 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 02:23:40 np0005558317 edpm-start-podman-container[154102]: Creating additional drop-in dependency for "ovn_metadata_agent" (1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07)
Dec 13 02:23:40 np0005558317 ovn_metadata_agent[154116]: ++ cat /run_command
Dec 13 02:23:40 np0005558317 ovn_metadata_agent[154116]: + CMD=neutron-ovn-metadata-agent
Dec 13 02:23:40 np0005558317 ovn_metadata_agent[154116]: + ARGS=
Dec 13 02:23:40 np0005558317 ovn_metadata_agent[154116]: + sudo kolla_copy_cacerts
Dec 13 02:23:40 np0005558317 systemd[1]: Reloading.
Dec 13 02:23:40 np0005558317 ovn_metadata_agent[154116]: Running command: 'neutron-ovn-metadata-agent'
Dec 13 02:23:40 np0005558317 ovn_metadata_agent[154116]: + [[ ! -n '' ]]
Dec 13 02:23:40 np0005558317 ovn_metadata_agent[154116]: + . kolla_extend_start
Dec 13 02:23:40 np0005558317 ovn_metadata_agent[154116]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec 13 02:23:40 np0005558317 ovn_metadata_agent[154116]: + umask 0022
Dec 13 02:23:40 np0005558317 ovn_metadata_agent[154116]: + exec neutron-ovn-metadata-agent
Dec 13 02:23:40 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:23:40 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:23:40 np0005558317 systemd[1]: Started ovn_metadata_agent container.
Dec 13 02:23:40 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v407: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:40 np0005558317 systemd[1]: session-50.scope: Deactivated successfully.
Dec 13 02:23:40 np0005558317 systemd[1]: session-50.scope: Consumed 41.260s CPU time.
Dec 13 02:23:40 np0005558317 systemd-logind[745]: Session 50 logged out. Waiting for processes to exit.
Dec 13 02:23:40 np0005558317 systemd-logind[745]: Removed session 50.
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.600 154121 INFO neutron.common.config [-] Logging enabled!#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.600 154121 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.600 154121 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.601 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.601 154121 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.601 154121 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.601 154121 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.601 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.601 154121 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.602 154121 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.602 154121 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.602 154121 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.602 154121 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.602 154121 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.602 154121 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.602 154121 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.602 154121 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.602 154121 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.602 154121 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.603 154121 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.603 154121 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.603 154121 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.603 154121 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.603 154121 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.603 154121 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.603 154121 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.603 154121 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.603 154121 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.603 154121 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.604 154121 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.604 154121 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.604 154121 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.604 154121 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.604 154121 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.604 154121 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.604 154121 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.604 154121 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.604 154121 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.604 154121 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.605 154121 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.605 154121 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.605 154121 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.605 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.605 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.605 154121 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.605 154121 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.605 154121 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.605 154121 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.605 154121 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.606 154121 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.606 154121 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.606 154121 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.606 154121 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.606 154121 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.606 154121 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.606 154121 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.606 154121 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.606 154121 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.606 154121 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.606 154121 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.607 154121 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.607 154121 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.607 154121 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.607 154121 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.607 154121 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.607 154121 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.607 154121 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.607 154121 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.607 154121 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.607 154121 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.608 154121 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.608 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.608 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.608 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.608 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.608 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.608 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.608 154121 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.608 154121 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.609 154121 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.609 154121 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.609 154121 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.609 154121 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.609 154121 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.609 154121 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.609 154121 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.609 154121 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.609 154121 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.609 154121 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.610 154121 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.610 154121 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.610 154121 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.610 154121 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.610 154121 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.610 154121 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.610 154121 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.610 154121 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.610 154121 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.610 154121 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.610 154121 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.610 154121 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.611 154121 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.611 154121 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.611 154121 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.611 154121 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.611 154121 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.611 154121 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.611 154121 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.611 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.611 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.611 154121 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.612 154121 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.612 154121 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.612 154121 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.612 154121 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.612 154121 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.612 154121 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.612 154121 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.612 154121 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.612 154121 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.613 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.613 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.613 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.613 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.613 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.613 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.613 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.614 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.614 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.614 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.614 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.614 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.614 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.614 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.614 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.614 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.615 154121 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.615 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.615 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.615 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.615 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.615 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.615 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.615 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.615 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.615 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.616 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.616 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.616 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.616 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.616 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.616 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.616 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.616 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.616 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.617 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.617 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.617 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.617 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.617 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.617 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.617 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.617 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.617 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.617 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.618 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.618 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.618 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.618 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.618 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.618 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.618 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.618 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.618 154121 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.619 154121 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.619 154121 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.619 154121 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.619 154121 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.619 154121 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.619 154121 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.619 154121 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.619 154121 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.619 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.620 154121 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.620 154121 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.620 154121 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.620 154121 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.620 154121 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.620 154121 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.620 154121 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.620 154121 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.620 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.620 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.621 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.621 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.621 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.621 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.621 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.621 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.621 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.621 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.621 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.621 154121 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.622 154121 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.622 154121 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.622 154121 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.622 154121 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.622 154121 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.622 154121 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.622 154121 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.622 154121 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.622 154121 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.622 154121 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.623 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.623 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.623 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.623 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.623 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.623 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.623 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.623 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.623 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.623 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.624 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.624 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.624 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.624 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.624 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.624 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.624 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.624 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.624 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.624 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.625 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.625 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.625 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.625 154121 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.625 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.625 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.625 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.625 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.625 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.626 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.626 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.626 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.626 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.626 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.626 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.626 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.626 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.626 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.627 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.627 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.627 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.627 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.627 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.627 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.627 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.627 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.627 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.627 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.628 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.628 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.628 154121 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.628 154121 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.628 154121 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.628 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.628 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.628 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.628 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.629 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.629 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.629 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.629 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.629 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.629 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.629 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.629 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.629 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.629 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.630 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.630 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.630 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.630 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.630 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.630 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.630 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.630 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.630 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.631 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.631 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.631 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.631 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.631 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.631 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.631 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.631 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.631 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.631 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.632 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.632 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.632 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.632 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.632 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.639 154121 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.639 154121 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.639 154121 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.639 154121 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.640 154121 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.650 154121 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 075cc82e-193d-47f2-a248-9917472f5475 (UUID: 075cc82e-193d-47f2-a248-9917472f5475) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.668 154121 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.668 154121 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.668 154121 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.668 154121 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.670 154121 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.676 154121 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.679 154121 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '075cc82e-193d-47f2-a248-9917472f5475'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f240d193b80>], external_ids={}, name=075cc82e-193d-47f2-a248-9917472f5475, nb_cfg_timestamp=1765610577914, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.680 154121 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f240d116fd0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.681 154121 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.681 154121 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.681 154121 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.681 154121 INFO oslo_service.service [-] Starting 1 workers
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.685 154121 DEBUG oslo_service.service [-] Started child 154224 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.687 154121 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpj55srbhp/privsep.sock']
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.688 154224 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-170709'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.704 154224 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.704 154224 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.704 154224 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.706 154224 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.712 154224 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 13 02:23:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.715 154224 INFO eventlet.wsgi.server [-] (154224) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Dec 13 02:23:42 np0005558317 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec 13 02:23:42 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:42.211 154121 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 13 02:23:42 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:42.212 154121 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpj55srbhp/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 13 02:23:42 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:42.133 154229 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 13 02:23:42 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:42.136 154229 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 13 02:23:42 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:42.138 154229 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 13 02:23:42 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:42.138 154229 INFO oslo.privsep.daemon [-] privsep daemon running as pid 154229
Dec 13 02:23:42 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:42.215 154229 DEBUG oslo.privsep.daemon [-] privsep: reply[732fe205-1cdd-4b5d-b004-6269471f91be]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 02:23:42 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v408: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:42 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:42.633 154229 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 02:23:42 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:42.633 154229 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 02:23:42 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:42.633 154229 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 02:23:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.083 154229 DEBUG oslo.privsep.daemon [-] privsep: reply[e4e69915-e524-495e-8d2f-7aec7dd240f0]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.085 154121 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=075cc82e-193d-47f2-a248-9917472f5475, column=external_ids, values=({'neutron:ovn-metadata-id': 'ab55531c-472b-5a5c-8fef-f07849a1dd3d'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.092 154121 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=075cc82e-193d-47f2-a248-9917472f5475, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.098 154121 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.099 154121 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.099 154121 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.099 154121 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.099 154121 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.099 154121 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.099 154121 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.099 154121 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.099 154121 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.099 154121 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.099 154121 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.100 154121 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.100 154121 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.100 154121 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.100 154121 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.100 154121 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.100 154121 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.100 154121 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.100 154121 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.100 154121 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.101 154121 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.101 154121 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.101 154121 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.101 154121 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.101 154121 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.101 154121 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.101 154121 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.101 154121 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.102 154121 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.102 154121 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.102 154121 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.102 154121 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.102 154121 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.102 154121 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.102 154121 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.102 154121 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.102 154121 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.103 154121 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.103 154121 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.103 154121 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.103 154121 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.103 154121 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.103 154121 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.103 154121 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.103 154121 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.104 154121 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.104 154121 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.104 154121 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.104 154121 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.104 154121 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.104 154121 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.104 154121 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.104 154121 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.104 154121 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.104 154121 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.104 154121 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.105 154121 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.105 154121 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.105 154121 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.105 154121 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.105 154121 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.105 154121 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.105 154121 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.105 154121 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.105 154121 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.105 154121 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.106 154121 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.106 154121 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.106 154121 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.106 154121 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.106 154121 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.106 154121 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.106 154121 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.106 154121 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.106 154121 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.107 154121 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.107 154121 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.107 154121 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.107 154121 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.107 154121 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.107 154121 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.107 154121 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.107 154121 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.107 154121 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.107 154121 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.108 154121 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.108 154121 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.108 154121 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.108 154121 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.108 154121 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.108 154121 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.108 154121 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.108 154121 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.108 154121 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.108 154121 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.108 154121 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.109 154121 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.109 154121 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.109 154121 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.109 154121 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.109 154121 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.109 154121 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.109 154121 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.109 154121 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.109 154121 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.109 154121 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.110 154121 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.110 154121 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.110 154121 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.110 154121 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.110 154121 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.110 154121 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.110 154121 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.110 154121 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.111 154121 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.111 154121 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.111 154121 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.111 154121 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.111 154121 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.111 154121 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.111 154121 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.111 154121 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.111 154121 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.112 154121 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.112 154121 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.112 154121 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.112 154121 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.112 154121 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.112 154121 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.112 154121 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.112 154121 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.112 154121 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.112 154121 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.113 154121 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.113 154121 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.113 154121 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.113 154121 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.113 154121 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.113 154121 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.113 154121 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.113 154121 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.113 154121 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.114 154121 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.114 154121 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.114 154121 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.114 154121 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.114 154121 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.114 154121 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.114 154121 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.114 154121 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.114 154121 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.114 154121 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.114 154121 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.115 154121 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.115 154121 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.115 154121 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.115 154121 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.115 154121 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.115 154121 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.115 154121 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.115 154121 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.115 154121 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.115 154121 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.115 154121 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.116 154121 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.116 154121 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.116 154121 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.116 154121 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.116 154121 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.116 154121 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.116 154121 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.116 154121 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.116 154121 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.116 154121 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.117 154121 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.117 154121 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.117 154121 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.117 154121 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.117 154121 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.117 154121 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.117 154121 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.117 154121 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.117 154121 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.117 154121 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.118 154121 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.118 154121 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.118 154121 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.118 154121 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.118 154121 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.118 154121 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.118 154121 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.118 154121 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.118 154121 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.119 154121 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.119 154121 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.119 154121 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.119 154121 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.119 154121 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.119 154121 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.119 154121 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.119 154121 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.119 154121 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.119 154121 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.119 154121 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.120 154121 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.120 154121 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.120 154121 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.120 154121 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.120 154121 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.120 154121 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.120 154121 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.120 154121 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.120 154121 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.120 154121 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.121 154121 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.121 154121 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.121 154121 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.121 154121 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.121 154121 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.121 154121 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.121 154121 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.121 154121 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.121 154121 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.121 154121 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.121 154121 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.122 154121 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.122 154121 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.122 154121 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.122 154121 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.122 154121 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.122 154121 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.122 154121 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.122 154121 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.122 154121 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.122 154121 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.122 154121 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.123 154121 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.123 154121 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.123 154121 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.123 154121 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.123 154121 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.123 154121 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.123 154121 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.123 154121 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.123 154121 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.124 154121 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.124 154121 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.124 154121 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.124 154121 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.124 154121 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.124 154121 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.124 154121 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.124 154121 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.124 154121 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.124 154121 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.124 154121 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.125 154121 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.125 154121 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.125 154121 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.125 154121 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.125 154121 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.125 154121 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.125 154121 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.125 154121 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.125 154121 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.125 154121 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.126 154121 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.126 154121 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.126 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.126 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.126 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.126 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.126 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.126 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.126 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.127 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.127 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.127 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.127 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.127 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.127 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.127 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.127 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.127 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.127 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.127 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.128 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.128 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.128 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.128 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.128 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.128 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.128 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.128 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.128 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.128 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.129 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.129 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.129 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.129 154121 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.129 154121 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.129 154121 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.129 154121 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:23:43 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.129 154121 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec 13 02:23:44 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v409: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.622424) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610624622488, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 867, "num_deletes": 251, "total_data_size": 1215543, "memory_usage": 1240832, "flush_reason": "Manual Compaction"}
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610624628092, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1193835, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8823, "largest_seqno": 9689, "table_properties": {"data_size": 1189495, "index_size": 1992, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9145, "raw_average_key_size": 18, "raw_value_size": 1180827, "raw_average_value_size": 2414, "num_data_blocks": 93, "num_entries": 489, "num_filter_entries": 489, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610548, "oldest_key_time": 1765610548, "file_creation_time": 1765610624, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 5661 microseconds, and 4526 cpu microseconds.
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.628121) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1193835 bytes OK
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.628133) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.628478) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.628488) EVENT_LOG_v1 {"time_micros": 1765610624628486, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.628501) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1211287, prev total WAL file size 1211287, number of live WAL files 2.
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.628933) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1165KB)], [23(6730KB)]
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610624628986, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 8086001, "oldest_snapshot_seqno": -1}
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3292 keys, 6300978 bytes, temperature: kUnknown
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610624643865, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6300978, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6277029, "index_size": 14624, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 79784, "raw_average_key_size": 24, "raw_value_size": 6215505, "raw_average_value_size": 1888, "num_data_blocks": 639, "num_entries": 3292, "num_filter_entries": 3292, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610001, "oldest_key_time": 0, "file_creation_time": 1765610624, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.644005) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6300978 bytes
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.644374) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 541.6 rd, 422.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 6.6 +0.0 blob) out(6.0 +0.0 blob), read-write-amplify(12.1) write-amplify(5.3) OK, records in: 3806, records dropped: 514 output_compression: NoCompression
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.644386) EVENT_LOG_v1 {"time_micros": 1765610624644381, "job": 8, "event": "compaction_finished", "compaction_time_micros": 14930, "compaction_time_cpu_micros": 11098, "output_level": 6, "num_output_files": 1, "total_output_size": 6300978, "num_input_records": 3806, "num_output_records": 3292, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610624644726, "job": 8, "event": "table_file_deletion", "file_number": 25}
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610624645771, "job": 8, "event": "table_file_deletion", "file_number": 23}
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.628832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.645818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.645820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.645821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.645822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:23:44 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.645823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:23:45 np0005558317 systemd-logind[745]: New session 51 of user zuul.
Dec 13 02:23:45 np0005558317 systemd[1]: Started Session 51 of User zuul.
Dec 13 02:23:46 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v410: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:46 np0005558317 python3.9[154387]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:23:47 np0005558317 python3.9[154543]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:23:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:23:48 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v411: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:23:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:23:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:23:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:23:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:23:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:23:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:23:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:23:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:23:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:23:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:23:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:23:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 02:23:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:23:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:23:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:23:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:23:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:23:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:23:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:23:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:23:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:23:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:23:48 np0005558317 python3.9[154705]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 02:23:48 np0005558317 systemd[1]: Reloading.
Dec 13 02:23:48 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:23:48 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:23:49 np0005558317 python3.9[154890]: ansible-ansible.builtin.service_facts Invoked
Dec 13 02:23:49 np0005558317 network[154907]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 02:23:49 np0005558317 network[154908]: 'network-scripts' will be removed from distribution in near future.
Dec 13 02:23:49 np0005558317 network[154909]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 02:23:49 np0005558317 podman[154915]: 2025-12-13 07:23:49.979052092 +0000 UTC m=+0.069482720 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 02:23:50 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v412: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:51 np0005558317 python3.9[155194]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:23:52 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v413: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:52 np0005558317 python3.9[155347]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:23:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:23:52 np0005558317 python3.9[155500]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:23:53 np0005558317 python3.9[155653]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:23:54 np0005558317 python3.9[155806]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:23:54 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v414: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:54 np0005558317 python3.9[155959]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:23:55 np0005558317 python3.9[156112]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:23:55 np0005558317 python3.9[156265]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:23:56 np0005558317 python3.9[156417]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:23:56 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v415: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:56 np0005558317 python3.9[156569]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:23:57 np0005558317 python3.9[156721]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:23:57 np0005558317 python3.9[156873]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:23:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:23:58 np0005558317 python3.9[157025]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:23:58 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v416: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:23:58 np0005558317 python3.9[157177]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:23:59 np0005558317 python3.9[157329]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:23:59 np0005558317 python3.9[157481]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:23:59 np0005558317 python3.9[157633]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:24:00 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v417: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:00 np0005558317 python3.9[157785]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:24:00 np0005558317 python3.9[157937]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:24:01 np0005558317 python3.9[158089]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:24:01 np0005558317 python3.9[158241]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:24:02 np0005558317 python3.9[158393]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:24:02 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v418: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:24:02 np0005558317 python3.9[158545]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 13 02:24:03 np0005558317 python3.9[158697]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 02:24:03 np0005558317 systemd[1]: Reloading.
Dec 13 02:24:03 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:24:03 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:24:04 np0005558317 python3.9[158884]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:24:04 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v419: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:04 np0005558317 python3.9[159037]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:24:05 np0005558317 python3.9[159190]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:24:05 np0005558317 python3.9[159343]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:24:06 np0005558317 python3.9[159496]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:24:06 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v420: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:06 np0005558317 python3.9[159649]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:24:07 np0005558317 python3.9[159802]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:24:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:24:08 np0005558317 python3.9[159955]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec 13 02:24:08 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v421: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:08 np0005558317 python3.9[160108]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 13 02:24:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:24:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:24:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:24:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:24:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:24:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:24:09 np0005558317 python3.9[160266]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 13 02:24:09 np0005558317 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 02:24:09 np0005558317 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 02:24:10 np0005558317 python3.9[160427]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:24:10 np0005558317 podman[160436]: 2025-12-13 07:24:10.386142908 +0000 UTC m=+0.042638768 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 02:24:10 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v422: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:10 np0005558317 python3.9[160527]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:24:12 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v423: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:12 np0005558317 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:24:12 np0005558317 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 5491 writes, 23K keys, 5491 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5491 writes, 855 syncs, 6.42 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5491 writes, 23K keys, 5491 commit groups, 1.0 writes per commit group, ingest: 18.46 MB, 0.03 MB/s#012Interval WAL: 5491 writes, 855 syncs, 6.42 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowd
Dec 13 02:24:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:24:14 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v424: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:15 np0005558317 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:24:15 np0005558317 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 6995 writes, 28K keys, 6995 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 6995 writes, 1406 syncs, 4.98 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 6995 writes, 28K keys, 6995 commit groups, 1.0 writes per commit group, ingest: 19.58 MB, 0.03 MB/s
Interval WAL: 6995 writes, 1406 syncs, 4.98 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Dec 13 02:24:16 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v425: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:24:18 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v426: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:18 np0005558317 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:24:18 np0005558317 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 5589 writes, 24K keys, 5589 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 5589 writes, 841 syncs, 6.65 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5586 writes, 24K keys, 5586 commit groups, 1.0 writes per commit group, ingest: 18.46 MB, 0.03 MB/s
Interval WAL: 5587 writes, 841 syncs, 6.64 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec 13 02:24:20 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v427: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:20 np0005558317 podman[160538]: 2025-12-13 07:24:20.719800106 +0000 UTC m=+0.057952317 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 13 02:24:22 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v428: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:24:23 np0005558317 ceph-mgr[75200]: [devicehealth INFO root] Check health
Dec 13 02:24:24 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v429: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:26 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v430: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:24:28 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v431: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:30 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v432: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:32 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v433: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:24:34 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v434: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:36 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v435: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:24:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:24:38
Dec 13 02:24:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:24:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 02:24:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] pools ['backups', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'vms', 'default.rgw.meta', 'default.rgw.log', 'images']
Dec 13 02:24:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:24:38 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v436: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:24:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:24:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:24:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:24:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:24:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:24:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:24:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:24:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:24:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:24:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:24:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:24:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:24:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:24:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:24:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:24:39 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:24:39 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:24:39 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:24:39 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:24:40 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 02:24:40 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 02:24:40 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:24:40 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:24:40 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:24:40 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:24:40 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:24:40 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:24:40 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:24:40 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:24:40 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:24:40 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:24:40 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:24:40 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:24:40 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v437: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:40 np0005558317 podman[160798]: 2025-12-13 07:24:40.549033428 +0000 UTC m=+0.028732746 container create ccf9988921e2d101b3c05eea92e4f4316b35b96565262dee32389ca23c134978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:24:40 np0005558317 systemd[1]: Started libpod-conmon-ccf9988921e2d101b3c05eea92e4f4316b35b96565262dee32389ca23c134978.scope.
Dec 13 02:24:40 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:24:40 np0005558317 podman[160798]: 2025-12-13 07:24:40.60152807 +0000 UTC m=+0.081227398 container init ccf9988921e2d101b3c05eea92e4f4316b35b96565262dee32389ca23c134978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:24:40 np0005558317 podman[160798]: 2025-12-13 07:24:40.606863003 +0000 UTC m=+0.086562331 container start ccf9988921e2d101b3c05eea92e4f4316b35b96565262dee32389ca23c134978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_sinoussi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Dec 13 02:24:40 np0005558317 podman[160798]: 2025-12-13 07:24:40.610097415 +0000 UTC m=+0.089796754 container attach ccf9988921e2d101b3c05eea92e4f4316b35b96565262dee32389ca23c134978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_sinoussi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:24:40 np0005558317 recursing_sinoussi[160815]: 167 167
Dec 13 02:24:40 np0005558317 systemd[1]: libpod-ccf9988921e2d101b3c05eea92e4f4316b35b96565262dee32389ca23c134978.scope: Deactivated successfully.
Dec 13 02:24:40 np0005558317 conmon[160815]: conmon ccf9988921e2d101b3c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ccf9988921e2d101b3c05eea92e4f4316b35b96565262dee32389ca23c134978.scope/container/memory.events
Dec 13 02:24:40 np0005558317 podman[160798]: 2025-12-13 07:24:40.612653412 +0000 UTC m=+0.092352730 container died ccf9988921e2d101b3c05eea92e4f4316b35b96565262dee32389ca23c134978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_sinoussi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:24:40 np0005558317 systemd[1]: var-lib-containers-storage-overlay-73d3fbd7640254a7d12e1166ddaa79199f53e4979310478794f2e82f936ccebe-merged.mount: Deactivated successfully.
Dec 13 02:24:40 np0005558317 podman[160798]: 2025-12-13 07:24:40.536929645 +0000 UTC m=+0.016628983 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:24:40 np0005558317 podman[160798]: 2025-12-13 07:24:40.639611921 +0000 UTC m=+0.119311239 container remove ccf9988921e2d101b3c05eea92e4f4316b35b96565262dee32389ca23c134978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_sinoussi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Dec 13 02:24:40 np0005558317 podman[160811]: 2025-12-13 07:24:40.644983613 +0000 UTC m=+0.072304217 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec 13 02:24:40 np0005558317 systemd[1]: libpod-conmon-ccf9988921e2d101b3c05eea92e4f4316b35b96565262dee32389ca23c134978.scope: Deactivated successfully.
Dec 13 02:24:40 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:24:40 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:24:40 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 02:24:40 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:24:40 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:24:40 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:24:40 np0005558317 podman[160861]: 2025-12-13 07:24:40.759985237 +0000 UTC m=+0.029221746 container create d1f8b2efe5636439f3a8ae392045e224e5d8febe9a93db5b505ac9fc2e6551ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 02:24:40 np0005558317 systemd[1]: Started libpod-conmon-d1f8b2efe5636439f3a8ae392045e224e5d8febe9a93db5b505ac9fc2e6551ff.scope.
Dec 13 02:24:40 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:24:40 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ab03953ee4e6fc0b4e25097479849fc22e35f072aa772767e152b2518b9632/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:24:40 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ab03953ee4e6fc0b4e25097479849fc22e35f072aa772767e152b2518b9632/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:24:40 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ab03953ee4e6fc0b4e25097479849fc22e35f072aa772767e152b2518b9632/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:24:40 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ab03953ee4e6fc0b4e25097479849fc22e35f072aa772767e152b2518b9632/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:24:40 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ab03953ee4e6fc0b4e25097479849fc22e35f072aa772767e152b2518b9632/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:24:40 np0005558317 podman[160861]: 2025-12-13 07:24:40.823989985 +0000 UTC m=+0.093226494 container init d1f8b2efe5636439f3a8ae392045e224e5d8febe9a93db5b505ac9fc2e6551ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_merkle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:24:40 np0005558317 podman[160861]: 2025-12-13 07:24:40.829104253 +0000 UTC m=+0.098340752 container start d1f8b2efe5636439f3a8ae392045e224e5d8febe9a93db5b505ac9fc2e6551ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 02:24:40 np0005558317 podman[160861]: 2025-12-13 07:24:40.830877878 +0000 UTC m=+0.100114378 container attach d1f8b2efe5636439f3a8ae392045e224e5d8febe9a93db5b505ac9fc2e6551ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 02:24:40 np0005558317 podman[160861]: 2025-12-13 07:24:40.747613269 +0000 UTC m=+0.016849789 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:24:41 np0005558317 hungry_merkle[160878]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:24:41 np0005558317 hungry_merkle[160878]: --> All data devices are unavailable
Dec 13 02:24:41 np0005558317 systemd[1]: libpod-d1f8b2efe5636439f3a8ae392045e224e5d8febe9a93db5b505ac9fc2e6551ff.scope: Deactivated successfully.
Dec 13 02:24:41 np0005558317 conmon[160878]: conmon d1f8b2efe5636439f3a8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d1f8b2efe5636439f3a8ae392045e224e5d8febe9a93db5b505ac9fc2e6551ff.scope/container/memory.events
Dec 13 02:24:41 np0005558317 podman[160861]: 2025-12-13 07:24:41.213675633 +0000 UTC m=+0.482912153 container died d1f8b2efe5636439f3a8ae392045e224e5d8febe9a93db5b505ac9fc2e6551ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_merkle, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 02:24:41 np0005558317 systemd[1]: var-lib-containers-storage-overlay-68ab03953ee4e6fc0b4e25097479849fc22e35f072aa772767e152b2518b9632-merged.mount: Deactivated successfully.
Dec 13 02:24:41 np0005558317 podman[160861]: 2025-12-13 07:24:41.236980882 +0000 UTC m=+0.506217381 container remove d1f8b2efe5636439f3a8ae392045e224e5d8febe9a93db5b505ac9fc2e6551ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:24:41 np0005558317 systemd[1]: libpod-conmon-d1f8b2efe5636439f3a8ae392045e224e5d8febe9a93db5b505ac9fc2e6551ff.scope: Deactivated successfully.
Dec 13 02:24:41 np0005558317 podman[161008]: 2025-12-13 07:24:41.565030656 +0000 UTC m=+0.025103462 container create 10d1dfdfcd37c053ce007c50265acd41851c0b69a0c404a6fb26b4e9443d4d51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:24:41 np0005558317 systemd[1]: Started libpod-conmon-10d1dfdfcd37c053ce007c50265acd41851c0b69a0c404a6fb26b4e9443d4d51.scope.
Dec 13 02:24:41 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:24:41 np0005558317 podman[161008]: 2025-12-13 07:24:41.609386032 +0000 UTC m=+0.069458857 container init 10d1dfdfcd37c053ce007c50265acd41851c0b69a0c404a6fb26b4e9443d4d51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dijkstra, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 02:24:41 np0005558317 podman[161008]: 2025-12-13 07:24:41.613976484 +0000 UTC m=+0.074049289 container start 10d1dfdfcd37c053ce007c50265acd41851c0b69a0c404a6fb26b4e9443d4d51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 02:24:41 np0005558317 podman[161008]: 2025-12-13 07:24:41.615113943 +0000 UTC m=+0.075186779 container attach 10d1dfdfcd37c053ce007c50265acd41851c0b69a0c404a6fb26b4e9443d4d51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dijkstra, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:24:41 np0005558317 laughing_dijkstra[161024]: 167 167
Dec 13 02:24:41 np0005558317 systemd[1]: libpod-10d1dfdfcd37c053ce007c50265acd41851c0b69a0c404a6fb26b4e9443d4d51.scope: Deactivated successfully.
Dec 13 02:24:41 np0005558317 podman[161008]: 2025-12-13 07:24:41.617177695 +0000 UTC m=+0.077250500 container died 10d1dfdfcd37c053ce007c50265acd41851c0b69a0c404a6fb26b4e9443d4d51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:24:41 np0005558317 systemd[1]: var-lib-containers-storage-overlay-1939bec3fa9b04377de7f827a95ce4d85c977016c3aa6b180645a03e4f7b1f43-merged.mount: Deactivated successfully.
Dec 13 02:24:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:24:41.634 154121 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:24:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:24:41.634 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:24:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:24:41.634 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:24:41 np0005558317 podman[161008]: 2025-12-13 07:24:41.636803773 +0000 UTC m=+0.096876579 container remove 10d1dfdfcd37c053ce007c50265acd41851c0b69a0c404a6fb26b4e9443d4d51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Dec 13 02:24:41 np0005558317 podman[161008]: 2025-12-13 07:24:41.554974345 +0000 UTC m=+0.015047170 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:24:41 np0005558317 systemd[1]: libpod-conmon-10d1dfdfcd37c053ce007c50265acd41851c0b69a0c404a6fb26b4e9443d4d51.scope: Deactivated successfully.
Dec 13 02:24:41 np0005558317 podman[161054]: 2025-12-13 07:24:41.762204654 +0000 UTC m=+0.029868392 container create 7af32cd8d3fd92e1270368a2dc0182a115b1c6b5aa24d4c4b14363cadcb9a23e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 02:24:41 np0005558317 systemd[1]: Started libpod-conmon-7af32cd8d3fd92e1270368a2dc0182a115b1c6b5aa24d4c4b14363cadcb9a23e.scope.
Dec 13 02:24:41 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:24:41 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e82b72cc039be561759e1b6036f2b7435892212e23ed7fb6229a19e29b6ba784/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:24:41 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e82b72cc039be561759e1b6036f2b7435892212e23ed7fb6229a19e29b6ba784/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:24:41 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e82b72cc039be561759e1b6036f2b7435892212e23ed7fb6229a19e29b6ba784/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:24:41 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e82b72cc039be561759e1b6036f2b7435892212e23ed7fb6229a19e29b6ba784/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:24:41 np0005558317 podman[161054]: 2025-12-13 07:24:41.832109268 +0000 UTC m=+0.099773016 container init 7af32cd8d3fd92e1270368a2dc0182a115b1c6b5aa24d4c4b14363cadcb9a23e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_elbakyan, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True)
Dec 13 02:24:41 np0005558317 podman[161054]: 2025-12-13 07:24:41.837322101 +0000 UTC m=+0.104985839 container start 7af32cd8d3fd92e1270368a2dc0182a115b1c6b5aa24d4c4b14363cadcb9a23e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_elbakyan, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:24:41 np0005558317 podman[161054]: 2025-12-13 07:24:41.841183773 +0000 UTC m=+0.108847532 container attach 7af32cd8d3fd92e1270368a2dc0182a115b1c6b5aa24d4c4b14363cadcb9a23e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_elbakyan, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 02:24:41 np0005558317 podman[161054]: 2025-12-13 07:24:41.750819433 +0000 UTC m=+0.018483181 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]: {
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:    "0": [
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:        {
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "devices": [
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "/dev/loop3"
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            ],
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "lv_name": "ceph_lv0",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "lv_size": "21470642176",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "name": "ceph_lv0",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "tags": {
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.cluster_name": "ceph",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.crush_device_class": "",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.encrypted": "0",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.objectstore": "bluestore",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.osd_id": "0",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.type": "block",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.vdo": "0",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.with_tpm": "0"
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            },
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "type": "block",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "vg_name": "ceph_vg0"
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:        }
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:    ],
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:    "1": [
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:        {
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "devices": [
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "/dev/loop4"
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            ],
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "lv_name": "ceph_lv1",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "lv_size": "21470642176",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "name": "ceph_lv1",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "tags": {
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.cluster_name": "ceph",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.crush_device_class": "",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.encrypted": "0",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.objectstore": "bluestore",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.osd_id": "1",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.type": "block",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.vdo": "0",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.with_tpm": "0"
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            },
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "type": "block",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "vg_name": "ceph_vg1"
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:        }
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:    ],
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:    "2": [
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:        {
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "devices": [
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "/dev/loop5"
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            ],
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "lv_name": "ceph_lv2",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "lv_size": "21470642176",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "name": "ceph_lv2",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "tags": {
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.cluster_name": "ceph",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.crush_device_class": "",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.encrypted": "0",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.objectstore": "bluestore",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.osd_id": "2",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.type": "block",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.vdo": "0",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:                "ceph.with_tpm": "0"
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            },
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "type": "block",
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:            "vg_name": "ceph_vg2"
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:        }
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]:    ]
Dec 13 02:24:42 np0005558317 frosty_elbakyan[161071]: }
Dec 13 02:24:42 np0005558317 systemd[1]: libpod-7af32cd8d3fd92e1270368a2dc0182a115b1c6b5aa24d4c4b14363cadcb9a23e.scope: Deactivated successfully.
Dec 13 02:24:42 np0005558317 podman[161054]: 2025-12-13 07:24:42.078719797 +0000 UTC m=+0.346383536 container died 7af32cd8d3fd92e1270368a2dc0182a115b1c6b5aa24d4c4b14363cadcb9a23e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:24:42 np0005558317 systemd[1]: var-lib-containers-storage-overlay-e82b72cc039be561759e1b6036f2b7435892212e23ed7fb6229a19e29b6ba784-merged.mount: Deactivated successfully.
Dec 13 02:24:42 np0005558317 podman[161054]: 2025-12-13 07:24:42.106809092 +0000 UTC m=+0.374472820 container remove 7af32cd8d3fd92e1270368a2dc0182a115b1c6b5aa24d4c4b14363cadcb9a23e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 02:24:42 np0005558317 systemd[1]: libpod-conmon-7af32cd8d3fd92e1270368a2dc0182a115b1c6b5aa24d4c4b14363cadcb9a23e.scope: Deactivated successfully.
Dec 13 02:24:42 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v438: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:42 np0005558317 podman[161182]: 2025-12-13 07:24:42.442428753 +0000 UTC m=+0.030909590 container create b912114ed4d0064a002d0efc1ea435ff201e8f10784d0d35b8fefc4625956f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:24:42 np0005558317 systemd[1]: Started libpod-conmon-b912114ed4d0064a002d0efc1ea435ff201e8f10784d0d35b8fefc4625956f10.scope.
Dec 13 02:24:42 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:24:42 np0005558317 podman[161182]: 2025-12-13 07:24:42.487797313 +0000 UTC m=+0.076278170 container init b912114ed4d0064a002d0efc1ea435ff201e8f10784d0d35b8fefc4625956f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 02:24:42 np0005558317 podman[161182]: 2025-12-13 07:24:42.492349055 +0000 UTC m=+0.080829891 container start b912114ed4d0064a002d0efc1ea435ff201e8f10784d0d35b8fefc4625956f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_black, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 02:24:42 np0005558317 podman[161182]: 2025-12-13 07:24:42.493384872 +0000 UTC m=+0.081865709 container attach b912114ed4d0064a002d0efc1ea435ff201e8f10784d0d35b8fefc4625956f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_black, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 02:24:42 np0005558317 interesting_black[161198]: 167 167
Dec 13 02:24:42 np0005558317 systemd[1]: libpod-b912114ed4d0064a002d0efc1ea435ff201e8f10784d0d35b8fefc4625956f10.scope: Deactivated successfully.
Dec 13 02:24:42 np0005558317 podman[161182]: 2025-12-13 07:24:42.496048772 +0000 UTC m=+0.084529609 container died b912114ed4d0064a002d0efc1ea435ff201e8f10784d0d35b8fefc4625956f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_black, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 02:24:42 np0005558317 podman[161182]: 2025-12-13 07:24:42.513344769 +0000 UTC m=+0.101825606 container remove b912114ed4d0064a002d0efc1ea435ff201e8f10784d0d35b8fefc4625956f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_black, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:24:42 np0005558317 podman[161182]: 2025-12-13 07:24:42.430741475 +0000 UTC m=+0.019222322 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:24:42 np0005558317 systemd[1]: libpod-conmon-b912114ed4d0064a002d0efc1ea435ff201e8f10784d0d35b8fefc4625956f10.scope: Deactivated successfully.
Dec 13 02:24:42 np0005558317 systemd[1]: var-lib-containers-storage-overlay-2a4a0bc1f57a3d47046154ac493fc9b15538ae795514448716ea205fb0fa086f-merged.mount: Deactivated successfully.
Dec 13 02:24:42 np0005558317 podman[161228]: 2025-12-13 07:24:42.63405062 +0000 UTC m=+0.029849976 container create bce2f922c025c277dd0416eb91159bf3598321c9edff74c6c646570b0c5c342a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_villani, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:24:42 np0005558317 systemd[1]: Started libpod-conmon-bce2f922c025c277dd0416eb91159bf3598321c9edff74c6c646570b0c5c342a.scope.
Dec 13 02:24:42 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:24:42 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9c805bb5b7b45fc738300f626babcee026c2c35f2137a08eb333593d82e9848/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:24:42 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9c805bb5b7b45fc738300f626babcee026c2c35f2137a08eb333593d82e9848/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:24:42 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9c805bb5b7b45fc738300f626babcee026c2c35f2137a08eb333593d82e9848/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:24:42 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9c805bb5b7b45fc738300f626babcee026c2c35f2137a08eb333593d82e9848/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:24:42 np0005558317 podman[161228]: 2025-12-13 07:24:42.693635647 +0000 UTC m=+0.089435023 container init bce2f922c025c277dd0416eb91159bf3598321c9edff74c6c646570b0c5c342a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_villani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 02:24:42 np0005558317 podman[161228]: 2025-12-13 07:24:42.698182477 +0000 UTC m=+0.093981834 container start bce2f922c025c277dd0416eb91159bf3598321c9edff74c6c646570b0c5c342a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_villani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:24:42 np0005558317 podman[161228]: 2025-12-13 07:24:42.69935418 +0000 UTC m=+0.095153538 container attach bce2f922c025c277dd0416eb91159bf3598321c9edff74c6c646570b0c5c342a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_villani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:24:42 np0005558317 podman[161228]: 2025-12-13 07:24:42.623055432 +0000 UTC m=+0.018854809 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:24:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:24:43 np0005558317 lvm[161354]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:24:43 np0005558317 lvm[161355]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:24:43 np0005558317 lvm[161354]: VG ceph_vg0 finished
Dec 13 02:24:43 np0005558317 lvm[161355]: VG ceph_vg1 finished
Dec 13 02:24:43 np0005558317 lvm[161358]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:24:43 np0005558317 lvm[161358]: VG ceph_vg2 finished
Dec 13 02:24:43 np0005558317 stupefied_villani[161245]: {}
Dec 13 02:24:43 np0005558317 lvm[161361]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:24:43 np0005558317 lvm[161361]: VG ceph_vg2 finished
Dec 13 02:24:43 np0005558317 systemd[1]: libpod-bce2f922c025c277dd0416eb91159bf3598321c9edff74c6c646570b0c5c342a.scope: Deactivated successfully.
Dec 13 02:24:43 np0005558317 podman[161228]: 2025-12-13 07:24:43.262230063 +0000 UTC m=+0.658029419 container died bce2f922c025c277dd0416eb91159bf3598321c9edff74c6c646570b0c5c342a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 02:24:43 np0005558317 systemd[1]: var-lib-containers-storage-overlay-e9c805bb5b7b45fc738300f626babcee026c2c35f2137a08eb333593d82e9848-merged.mount: Deactivated successfully.
Dec 13 02:24:43 np0005558317 podman[161228]: 2025-12-13 07:24:43.285766697 +0000 UTC m=+0.681566054 container remove bce2f922c025c277dd0416eb91159bf3598321c9edff74c6c646570b0c5c342a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:24:43 np0005558317 systemd[1]: libpod-conmon-bce2f922c025c277dd0416eb91159bf3598321c9edff74c6c646570b0c5c342a.scope: Deactivated successfully.
Dec 13 02:24:43 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:24:43 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:24:43 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:24:43 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:24:43 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:24:43 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:24:44 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v439: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:46 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v440: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:24:48 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v441: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:24:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:24:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:24:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:24:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:24:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:24:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:24:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:24:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:24:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:24:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:24:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:24:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 02:24:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:24:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:24:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:24:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:24:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:24:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:24:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:24:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:24:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:24:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:24:50 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v442: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:51 np0005558317 podman[161404]: 2025-12-13 07:24:51.720979443 +0000 UTC m=+0.061115445 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true)
Dec 13 02:24:52 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v443: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:24:54 np0005558317 kernel: SELinux:  Converting 2769 SID table entries...
Dec 13 02:24:54 np0005558317 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 02:24:54 np0005558317 kernel: SELinux:  policy capability open_perms=1
Dec 13 02:24:54 np0005558317 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 02:24:54 np0005558317 kernel: SELinux:  policy capability always_check_network=0
Dec 13 02:24:54 np0005558317 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 02:24:54 np0005558317 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 02:24:54 np0005558317 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 02:24:54 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v444: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:56 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v445: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:24:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:24:58 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v446: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:25:00 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v447: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:25:01 np0005558317 kernel: SELinux:  Converting 2769 SID table entries...
Dec 13 02:25:01 np0005558317 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 02:25:01 np0005558317 kernel: SELinux:  policy capability open_perms=1
Dec 13 02:25:01 np0005558317 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 02:25:01 np0005558317 kernel: SELinux:  policy capability always_check_network=0
Dec 13 02:25:01 np0005558317 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 02:25:01 np0005558317 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 02:25:01 np0005558317 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 02:25:02 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v448: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:25:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:25:04 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v449: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:25:06 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v450: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:25:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:25:08 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v451: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Dec 13 02:25:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:25:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:25:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:25:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:25:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:25:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:25:10 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v452: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Dec 13 02:25:11 np0005558317 dbus-broker-launch[734]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec 13 02:25:11 np0005558317 podman[162491]: 2025-12-13 07:25:11.700597103 +0000 UTC m=+0.040640806 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 13 02:25:12 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v453: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 02:25:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:25:14 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v454: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 02:25:16 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v455: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 02:25:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:25:18 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v456: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 02:25:20 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v457: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Dec 13 02:25:22 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v458: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Dec 13 02:25:22 np0005558317 podman[173232]: 2025-12-13 07:25:22.711611001 +0000 UTC m=+0.056744391 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 02:25:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:25:24 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v459: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:25:26 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v460: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:25:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:25:28 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v461: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:25:30 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v462: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:25:32 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v463: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:25:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:25:34 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v464: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:25:36 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v465: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:25:37 np0005558317 kernel: SELinux:  Converting 2770 SID table entries...
Dec 13 02:25:37 np0005558317 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 02:25:37 np0005558317 kernel: SELinux:  policy capability open_perms=1
Dec 13 02:25:37 np0005558317 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 02:25:37 np0005558317 kernel: SELinux:  policy capability always_check_network=0
Dec 13 02:25:37 np0005558317 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 02:25:37 np0005558317 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 02:25:37 np0005558317 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 02:25:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:25:37 np0005558317 dbus-broker-launch[727]: Noticed file-system modification, trigger reload.
Dec 13 02:25:37 np0005558317 dbus-broker-launch[734]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec 13 02:25:37 np0005558317 dbus-broker-launch[727]: Noticed file-system modification, trigger reload.
Dec 13 02:25:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:25:38
Dec 13 02:25:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:25:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 02:25:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'backups', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr']
Dec 13 02:25:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:25:38 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v466: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:25:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:25:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:25:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:25:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:25:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:25:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:25:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:25:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:25:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:25:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:25:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:25:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:25:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:25:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:25:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:25:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:25:40 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v467: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:25:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:25:41.635 154121 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:25:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:25:41.636 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:25:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:25:41.636 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:25:42 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v468: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:25:42 np0005558317 podman[178533]: 2025-12-13 07:25:42.735055475 +0000 UTC m=+0.067655744 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 02:25:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:25:43 np0005558317 systemd[1]: Stopping OpenSSH server daemon...
Dec 13 02:25:43 np0005558317 systemd[1]: sshd.service: Deactivated successfully.
Dec 13 02:25:43 np0005558317 systemd[1]: Stopped OpenSSH server daemon.
Dec 13 02:25:43 np0005558317 systemd[1]: sshd.service: Consumed 1.500s CPU time, read 32.0K from disk, written 0B to disk.
Dec 13 02:25:43 np0005558317 systemd[1]: Stopped target sshd-keygen.target.
Dec 13 02:25:43 np0005558317 systemd[1]: Stopping sshd-keygen.target...
Dec 13 02:25:43 np0005558317 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 13 02:25:43 np0005558317 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 13 02:25:43 np0005558317 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 13 02:25:43 np0005558317 systemd[1]: Reached target sshd-keygen.target.
Dec 13 02:25:43 np0005558317 systemd[1]: Starting OpenSSH server daemon...
Dec 13 02:25:43 np0005558317 systemd[1]: Started OpenSSH server daemon.
Dec 13 02:25:43 np0005558317 podman[179294]: 2025-12-13 07:25:43.816130989 +0000 UTC m=+0.052542562 container exec 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:25:43 np0005558317 podman[179294]: 2025-12-13 07:25:43.918741518 +0000 UTC m=+0.155153091 container exec_died 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 02:25:44 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v469: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:25:44 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:25:44 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:25:44 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:25:44 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:25:44 np0005558317 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 02:25:44 np0005558317 systemd[1]: Starting man-db-cache-update.service...
Dec 13 02:25:44 np0005558317 systemd[1]: Reloading.
Dec 13 02:25:45 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:25:45 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:25:45 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:25:45 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:25:45 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:25:45 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:25:45 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:25:45 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:25:45 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:25:45 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:25:45 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:25:45 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:25:45 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:25:45 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:25:45 np0005558317 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 02:25:45 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:25:45 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:25:45 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:25:45 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:25:45 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:25:45 np0005558317 podman[180356]: 2025-12-13 07:25:45.48059038 +0000 UTC m=+0.017546619 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:25:45 np0005558317 podman[180356]: 2025-12-13 07:25:45.953242041 +0000 UTC m=+0.490198259 container create 18d1fb2a53527d6eab8b6a67684dd5b28f750d03147f93d351ae6ac68b91ebb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hermann, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:25:46 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v470: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:25:46 np0005558317 systemd[1]: Started libpod-conmon-18d1fb2a53527d6eab8b6a67684dd5b28f750d03147f93d351ae6ac68b91ebb3.scope.
Dec 13 02:25:46 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:25:46 np0005558317 podman[180356]: 2025-12-13 07:25:46.696823943 +0000 UTC m=+1.233780181 container init 18d1fb2a53527d6eab8b6a67684dd5b28f750d03147f93d351ae6ac68b91ebb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:25:46 np0005558317 podman[180356]: 2025-12-13 07:25:46.702516575 +0000 UTC m=+1.239472793 container start 18d1fb2a53527d6eab8b6a67684dd5b28f750d03147f93d351ae6ac68b91ebb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hermann, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 02:25:46 np0005558317 podman[180356]: 2025-12-13 07:25:46.706489975 +0000 UTC m=+1.243446203 container attach 18d1fb2a53527d6eab8b6a67684dd5b28f750d03147f93d351ae6ac68b91ebb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hermann, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 02:25:46 np0005558317 great_hermann[182085]: 167 167
Dec 13 02:25:46 np0005558317 systemd[1]: libpod-18d1fb2a53527d6eab8b6a67684dd5b28f750d03147f93d351ae6ac68b91ebb3.scope: Deactivated successfully.
Dec 13 02:25:46 np0005558317 podman[180356]: 2025-12-13 07:25:46.707137853 +0000 UTC m=+1.244094071 container died 18d1fb2a53527d6eab8b6a67684dd5b28f750d03147f93d351ae6ac68b91ebb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:25:46 np0005558317 systemd[1]: var-lib-containers-storage-overlay-2950c24b9aa824c4417ddd99cae21bbda037f5125db89e2cb4dfc35edbe15c74-merged.mount: Deactivated successfully.
Dec 13 02:25:46 np0005558317 podman[180356]: 2025-12-13 07:25:46.735854631 +0000 UTC m=+1.272810850 container remove 18d1fb2a53527d6eab8b6a67684dd5b28f750d03147f93d351ae6ac68b91ebb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hermann, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 02:25:46 np0005558317 systemd[1]: libpod-conmon-18d1fb2a53527d6eab8b6a67684dd5b28f750d03147f93d351ae6ac68b91ebb3.scope: Deactivated successfully.
Dec 13 02:25:46 np0005558317 podman[182765]: 2025-12-13 07:25:46.862278609 +0000 UTC m=+0.032902549 container create 2c6dba67e68863417fb7acf8206c088fb2216780fdd017bf68b1cc3088038366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 02:25:46 np0005558317 systemd[1]: Started libpod-conmon-2c6dba67e68863417fb7acf8206c088fb2216780fdd017bf68b1cc3088038366.scope.
Dec 13 02:25:46 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:25:46 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128141ae9eca5d3fa24835d42bed329c14dae2cbac94b4a9cc76275e1e03d40f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:25:46 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128141ae9eca5d3fa24835d42bed329c14dae2cbac94b4a9cc76275e1e03d40f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:25:46 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128141ae9eca5d3fa24835d42bed329c14dae2cbac94b4a9cc76275e1e03d40f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:25:46 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128141ae9eca5d3fa24835d42bed329c14dae2cbac94b4a9cc76275e1e03d40f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:25:46 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128141ae9eca5d3fa24835d42bed329c14dae2cbac94b4a9cc76275e1e03d40f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:25:46 np0005558317 podman[182765]: 2025-12-13 07:25:46.926378218 +0000 UTC m=+0.097002167 container init 2c6dba67e68863417fb7acf8206c088fb2216780fdd017bf68b1cc3088038366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:25:46 np0005558317 podman[182765]: 2025-12-13 07:25:46.932822302 +0000 UTC m=+0.103446232 container start 2c6dba67e68863417fb7acf8206c088fb2216780fdd017bf68b1cc3088038366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 02:25:46 np0005558317 podman[182765]: 2025-12-13 07:25:46.934263621 +0000 UTC m=+0.104887551 container attach 2c6dba67e68863417fb7acf8206c088fb2216780fdd017bf68b1cc3088038366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 02:25:46 np0005558317 podman[182765]: 2025-12-13 07:25:46.847339073 +0000 UTC m=+0.017963021 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:25:47 np0005558317 inspiring_mirzakhani[182882]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:25:47 np0005558317 inspiring_mirzakhani[182882]: --> All data devices are unavailable
Dec 13 02:25:47 np0005558317 systemd[1]: libpod-2c6dba67e68863417fb7acf8206c088fb2216780fdd017bf68b1cc3088038366.scope: Deactivated successfully.
Dec 13 02:25:47 np0005558317 podman[182765]: 2025-12-13 07:25:47.287166734 +0000 UTC m=+0.457790663 container died 2c6dba67e68863417fb7acf8206c088fb2216780fdd017bf68b1cc3088038366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Dec 13 02:25:47 np0005558317 systemd[1]: var-lib-containers-storage-overlay-128141ae9eca5d3fa24835d42bed329c14dae2cbac94b4a9cc76275e1e03d40f-merged.mount: Deactivated successfully.
Dec 13 02:25:47 np0005558317 podman[182765]: 2025-12-13 07:25:47.313235934 +0000 UTC m=+0.483859863 container remove 2c6dba67e68863417fb7acf8206c088fb2216780fdd017bf68b1cc3088038366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:25:47 np0005558317 systemd[1]: libpod-conmon-2c6dba67e68863417fb7acf8206c088fb2216780fdd017bf68b1cc3088038366.scope: Deactivated successfully.
Dec 13 02:25:47 np0005558317 podman[184229]: 2025-12-13 07:25:47.665886671 +0000 UTC m=+0.034038443 container create 20a4e358ec30b91561e5f2c7ed56d64ab2e65100dcb490a15a43c7c2782dba63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:25:47 np0005558317 systemd[1]: Started libpod-conmon-20a4e358ec30b91561e5f2c7ed56d64ab2e65100dcb490a15a43c7c2782dba63.scope.
Dec 13 02:25:47 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:25:47 np0005558317 podman[184229]: 2025-12-13 07:25:47.72358239 +0000 UTC m=+0.091734183 container init 20a4e358ec30b91561e5f2c7ed56d64ab2e65100dcb490a15a43c7c2782dba63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_elbakyan, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 02:25:47 np0005558317 podman[184229]: 2025-12-13 07:25:47.728591918 +0000 UTC m=+0.096743680 container start 20a4e358ec30b91561e5f2c7ed56d64ab2e65100dcb490a15a43c7c2782dba63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 02:25:47 np0005558317 podman[184229]: 2025-12-13 07:25:47.730154405 +0000 UTC m=+0.098306178 container attach 20a4e358ec30b91561e5f2c7ed56d64ab2e65100dcb490a15a43c7c2782dba63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_elbakyan, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 02:25:47 np0005558317 charming_elbakyan[184325]: 167 167
Dec 13 02:25:47 np0005558317 podman[184229]: 2025-12-13 07:25:47.732489386 +0000 UTC m=+0.100641157 container died 20a4e358ec30b91561e5f2c7ed56d64ab2e65100dcb490a15a43c7c2782dba63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:25:47 np0005558317 systemd[1]: libpod-20a4e358ec30b91561e5f2c7ed56d64ab2e65100dcb490a15a43c7c2782dba63.scope: Deactivated successfully.
Dec 13 02:25:47 np0005558317 podman[184229]: 2025-12-13 07:25:47.651702274 +0000 UTC m=+0.019854047 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:25:47 np0005558317 systemd[1]: var-lib-containers-storage-overlay-1ac67443d899512b6803ded8429a2d1069aa42bcdbc98a21a6c3764dafa0dc8b-merged.mount: Deactivated successfully.
Dec 13 02:25:47 np0005558317 podman[184229]: 2025-12-13 07:25:47.761976823 +0000 UTC m=+0.130128594 container remove 20a4e358ec30b91561e5f2c7ed56d64ab2e65100dcb490a15a43c7c2782dba63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:25:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:25:47 np0005558317 systemd[1]: libpod-conmon-20a4e358ec30b91561e5f2c7ed56d64ab2e65100dcb490a15a43c7c2782dba63.scope: Deactivated successfully.
Dec 13 02:25:47 np0005558317 podman[184606]: 2025-12-13 07:25:47.890981942 +0000 UTC m=+0.031360859 container create c9ea200225a7e19c01a368979be588395f6871766a37ecb91976c12ea053986a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_williams, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:25:47 np0005558317 systemd[1]: Started libpod-conmon-c9ea200225a7e19c01a368979be588395f6871766a37ecb91976c12ea053986a.scope.
Dec 13 02:25:47 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:25:47 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac12634e7e1913ec503937c097eeebc42e188fd6d72ffecb86b0109d49013245/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:25:47 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac12634e7e1913ec503937c097eeebc42e188fd6d72ffecb86b0109d49013245/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:25:47 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac12634e7e1913ec503937c097eeebc42e188fd6d72ffecb86b0109d49013245/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:25:47 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac12634e7e1913ec503937c097eeebc42e188fd6d72ffecb86b0109d49013245/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:25:47 np0005558317 podman[184606]: 2025-12-13 07:25:47.946264705 +0000 UTC m=+0.086643632 container init c9ea200225a7e19c01a368979be588395f6871766a37ecb91976c12ea053986a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 02:25:47 np0005558317 podman[184606]: 2025-12-13 07:25:47.954186218 +0000 UTC m=+0.094565133 container start c9ea200225a7e19c01a368979be588395f6871766a37ecb91976c12ea053986a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_williams, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:25:47 np0005558317 podman[184606]: 2025-12-13 07:25:47.962455344 +0000 UTC m=+0.102834270 container attach c9ea200225a7e19c01a368979be588395f6871766a37ecb91976c12ea053986a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 02:25:47 np0005558317 podman[184606]: 2025-12-13 07:25:47.876051313 +0000 UTC m=+0.016430229 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:25:48 np0005558317 python3.9[184595]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 02:25:48 np0005558317 systemd[1]: Reloading.
Dec 13 02:25:48 np0005558317 elated_williams[184707]: {
Dec 13 02:25:48 np0005558317 elated_williams[184707]:    "0": [
Dec 13 02:25:48 np0005558317 elated_williams[184707]:        {
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "devices": [
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "/dev/loop3"
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            ],
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "lv_name": "ceph_lv0",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "lv_size": "21470642176",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "name": "ceph_lv0",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "tags": {
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.cluster_name": "ceph",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.crush_device_class": "",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.encrypted": "0",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.objectstore": "bluestore",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.osd_id": "0",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.type": "block",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.vdo": "0",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.with_tpm": "0"
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            },
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "type": "block",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "vg_name": "ceph_vg0"
Dec 13 02:25:48 np0005558317 elated_williams[184707]:        }
Dec 13 02:25:48 np0005558317 elated_williams[184707]:    ],
Dec 13 02:25:48 np0005558317 elated_williams[184707]:    "1": [
Dec 13 02:25:48 np0005558317 elated_williams[184707]:        {
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "devices": [
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "/dev/loop4"
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            ],
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "lv_name": "ceph_lv1",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "lv_size": "21470642176",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "name": "ceph_lv1",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "tags": {
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.cluster_name": "ceph",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.crush_device_class": "",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.encrypted": "0",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.objectstore": "bluestore",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.osd_id": "1",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.type": "block",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.vdo": "0",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.with_tpm": "0"
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            },
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "type": "block",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "vg_name": "ceph_vg1"
Dec 13 02:25:48 np0005558317 elated_williams[184707]:        }
Dec 13 02:25:48 np0005558317 elated_williams[184707]:    ],
Dec 13 02:25:48 np0005558317 elated_williams[184707]:    "2": [
Dec 13 02:25:48 np0005558317 elated_williams[184707]:        {
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "devices": [
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "/dev/loop5"
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            ],
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "lv_name": "ceph_lv2",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "lv_size": "21470642176",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "name": "ceph_lv2",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "tags": {
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.cluster_name": "ceph",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.crush_device_class": "",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.encrypted": "0",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.objectstore": "bluestore",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.osd_id": "2",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.type": "block",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.vdo": "0",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:                "ceph.with_tpm": "0"
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            },
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "type": "block",
Dec 13 02:25:48 np0005558317 elated_williams[184707]:            "vg_name": "ceph_vg2"
Dec 13 02:25:48 np0005558317 elated_williams[184707]:        }
Dec 13 02:25:48 np0005558317 elated_williams[184707]:    ]
Dec 13 02:25:48 np0005558317 elated_williams[184707]: }
Dec 13 02:25:48 np0005558317 podman[184606]: 2025-12-13 07:25:48.180843867 +0000 UTC m=+0.321222784 container died c9ea200225a7e19c01a368979be588395f6871766a37ecb91976c12ea053986a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:25:48 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:25:48 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:25:48 np0005558317 systemd[1]: libpod-c9ea200225a7e19c01a368979be588395f6871766a37ecb91976c12ea053986a.scope: Deactivated successfully.
Dec 13 02:25:48 np0005558317 systemd[1]: var-lib-containers-storage-overlay-ac12634e7e1913ec503937c097eeebc42e188fd6d72ffecb86b0109d49013245-merged.mount: Deactivated successfully.
Dec 13 02:25:48 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v471: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:25:48 np0005558317 podman[184606]: 2025-12-13 07:25:48.418599851 +0000 UTC m=+0.558978767 container remove c9ea200225a7e19c01a368979be588395f6871766a37ecb91976c12ea053986a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:25:48 np0005558317 systemd[1]: libpod-conmon-c9ea200225a7e19c01a368979be588395f6871766a37ecb91976c12ea053986a.scope: Deactivated successfully.
Dec 13 02:25:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:25:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:25:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:25:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:25:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:25:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:25:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:25:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:25:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:25:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:25:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:25:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:25:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 02:25:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:25:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:25:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:25:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:25:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:25:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:25:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:25:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:25:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:25:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:25:48 np0005558317 podman[186087]: 2025-12-13 07:25:48.783590917 +0000 UTC m=+0.032639744 container create 57028279ec2476bf55108eb4f607c592e7819d17dee8671a03aef52509299632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kalam, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 02:25:48 np0005558317 systemd[1]: Started libpod-conmon-57028279ec2476bf55108eb4f607c592e7819d17dee8671a03aef52509299632.scope.
Dec 13 02:25:48 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:25:48 np0005558317 podman[186087]: 2025-12-13 07:25:48.827420808 +0000 UTC m=+0.076469645 container init 57028279ec2476bf55108eb4f607c592e7819d17dee8671a03aef52509299632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kalam, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 02:25:48 np0005558317 podman[186087]: 2025-12-13 07:25:48.832944824 +0000 UTC m=+0.081993641 container start 57028279ec2476bf55108eb4f607c592e7819d17dee8671a03aef52509299632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 02:25:48 np0005558317 podman[186087]: 2025-12-13 07:25:48.834245048 +0000 UTC m=+0.083293886 container attach 57028279ec2476bf55108eb4f607c592e7819d17dee8671a03aef52509299632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:25:48 np0005558317 thirsty_kalam[186204]: 167 167
Dec 13 02:25:48 np0005558317 systemd[1]: libpod-57028279ec2476bf55108eb4f607c592e7819d17dee8671a03aef52509299632.scope: Deactivated successfully.
Dec 13 02:25:48 np0005558317 podman[186087]: 2025-12-13 07:25:48.837138247 +0000 UTC m=+0.086187064 container died 57028279ec2476bf55108eb4f607c592e7819d17dee8671a03aef52509299632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:25:48 np0005558317 systemd[1]: var-lib-containers-storage-overlay-2fc6c7d82c05715dbe2f3fa8ec49b3a7f73ca67f18147cb32f57f116c5bdbc8d-merged.mount: Deactivated successfully.
Dec 13 02:25:48 np0005558317 podman[186087]: 2025-12-13 07:25:48.860340308 +0000 UTC m=+0.109389125 container remove 57028279ec2476bf55108eb4f607c592e7819d17dee8671a03aef52509299632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:25:48 np0005558317 podman[186087]: 2025-12-13 07:25:48.770752752 +0000 UTC m=+0.019801589 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:25:48 np0005558317 systemd[1]: libpod-conmon-57028279ec2476bf55108eb4f607c592e7819d17dee8671a03aef52509299632.scope: Deactivated successfully.
Dec 13 02:25:48 np0005558317 podman[186439]: 2025-12-13 07:25:48.982132989 +0000 UTC m=+0.027604538 container create fafb32154f93bfe1de7008ca8b8b248391b205f35534befe629689db3b51d630 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 02:25:49 np0005558317 systemd[1]: Started libpod-conmon-fafb32154f93bfe1de7008ca8b8b248391b205f35534befe629689db3b51d630.scope.
Dec 13 02:25:49 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:25:49 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cc19cbcea4da2fc45bef94235b623dd151c23b98016264d9d840de833c2f840/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:25:49 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cc19cbcea4da2fc45bef94235b623dd151c23b98016264d9d840de833c2f840/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:25:49 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cc19cbcea4da2fc45bef94235b623dd151c23b98016264d9d840de833c2f840/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:25:49 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cc19cbcea4da2fc45bef94235b623dd151c23b98016264d9d840de833c2f840/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:25:49 np0005558317 podman[186439]: 2025-12-13 07:25:49.039063649 +0000 UTC m=+0.084535188 container init fafb32154f93bfe1de7008ca8b8b248391b205f35534befe629689db3b51d630 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_spence, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:25:49 np0005558317 podman[186439]: 2025-12-13 07:25:49.044377229 +0000 UTC m=+0.089848767 container start fafb32154f93bfe1de7008ca8b8b248391b205f35534befe629689db3b51d630 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_spence, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Dec 13 02:25:49 np0005558317 podman[186439]: 2025-12-13 07:25:49.048315633 +0000 UTC m=+0.093787172 container attach fafb32154f93bfe1de7008ca8b8b248391b205f35534befe629689db3b51d630 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_spence, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:25:49 np0005558317 python3.9[186181]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 02:25:49 np0005558317 podman[186439]: 2025-12-13 07:25:48.970431811 +0000 UTC m=+0.015903370 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:25:49 np0005558317 systemd[1]: Reloading.
Dec 13 02:25:49 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:25:49 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:25:49 np0005558317 lvm[187293]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:25:49 np0005558317 lvm[187293]: VG ceph_vg0 finished
Dec 13 02:25:49 np0005558317 lvm[187292]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:25:49 np0005558317 lvm[187292]: VG ceph_vg1 finished
Dec 13 02:25:49 np0005558317 thirsty_spence[186515]: {}
Dec 13 02:25:49 np0005558317 lvm[187319]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:25:49 np0005558317 lvm[187319]: VG ceph_vg2 finished
Dec 13 02:25:49 np0005558317 systemd[1]: libpod-fafb32154f93bfe1de7008ca8b8b248391b205f35534befe629689db3b51d630.scope: Deactivated successfully.
Dec 13 02:25:49 np0005558317 podman[186439]: 2025-12-13 07:25:49.665955603 +0000 UTC m=+0.711427132 container died fafb32154f93bfe1de7008ca8b8b248391b205f35534befe629689db3b51d630 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:25:49 np0005558317 lvm[187357]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:25:49 np0005558317 lvm[187357]: VG ceph_vg2 finished
Dec 13 02:25:49 np0005558317 systemd[1]: var-lib-containers-storage-overlay-8cc19cbcea4da2fc45bef94235b623dd151c23b98016264d9d840de833c2f840-merged.mount: Deactivated successfully.
Dec 13 02:25:49 np0005558317 podman[186439]: 2025-12-13 07:25:49.695194111 +0000 UTC m=+0.740665651 container remove fafb32154f93bfe1de7008ca8b8b248391b205f35534befe629689db3b51d630 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:25:49 np0005558317 systemd[1]: libpod-conmon-fafb32154f93bfe1de7008ca8b8b248391b205f35534befe629689db3b51d630.scope: Deactivated successfully.
Dec 13 02:25:49 np0005558317 lvm[187397]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:25:49 np0005558317 lvm[187397]: VG ceph_vg2 finished
Dec 13 02:25:49 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:25:49 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:25:49 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:25:49 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:25:50 np0005558317 python3.9[187568]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 02:25:50 np0005558317 systemd[1]: Reloading.
Dec 13 02:25:50 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:25:50 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:25:50 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v472: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:25:50 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:25:50 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:25:50 np0005558317 python3.9[188997]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 02:25:50 np0005558317 systemd[1]: Reloading.
Dec 13 02:25:51 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:25:51 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:25:51 np0005558317 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 02:25:51 np0005558317 systemd[1]: Finished man-db-cache-update.service.
Dec 13 02:25:51 np0005558317 systemd[1]: man-db-cache-update.service: Consumed 6.786s CPU time.
Dec 13 02:25:51 np0005558317 systemd[1]: run-rd85c87235c8844d2baecc0c67c88dff0.service: Deactivated successfully.
Dec 13 02:25:51 np0005558317 python3.9[189533]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:25:51 np0005558317 systemd[1]: Reloading.
Dec 13 02:25:51 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:25:51 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:25:52 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v473: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:25:52 np0005558317 python3.9[189722]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:25:52 np0005558317 systemd[1]: Reloading.
Dec 13 02:25:52 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:25:52 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:25:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:25:52 np0005558317 podman[189761]: 2025-12-13 07:25:52.992267155 +0000 UTC m=+0.067931994 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 02:25:53 np0005558317 python3.9[189936]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:25:53 np0005558317 systemd[1]: Reloading.
Dec 13 02:25:53 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:25:53 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:25:54 np0005558317 python3.9[190126]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:25:54 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v474: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:25:54 np0005558317 python3.9[190281]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:25:54 np0005558317 systemd[1]: Reloading.
Dec 13 02:25:54 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:25:54 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:25:55 np0005558317 python3.9[190471]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 02:25:55 np0005558317 systemd[1]: Reloading.
Dec 13 02:25:55 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:25:55 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:25:56 np0005558317 systemd[1]: Listening on libvirt proxy daemon socket.
Dec 13 02:25:56 np0005558317 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec 13 02:25:56 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v475: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:25:56 np0005558317 python3.9[190664]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:25:57 np0005558317 python3.9[190819]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:25:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:25:57 np0005558317 python3.9[190974]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:25:58 np0005558317 python3.9[191129]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:25:58 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v476: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:25:58 np0005558317 python3.9[191284]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:25:59 np0005558317 python3.9[191439]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:26:00 np0005558317 python3.9[191594]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:26:00 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v477: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:00 np0005558317 python3.9[191749]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:26:01 np0005558317 python3.9[191904]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:26:02 np0005558317 python3.9[192059]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:26:02 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v478: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:02 np0005558317 python3.9[192214]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:26:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:26:03 np0005558317 python3.9[192369]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:26:03 np0005558317 python3.9[192524]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:26:04 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v479: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:04 np0005558317 python3.9[192679]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:26:05 np0005558317 python3.9[192834]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:26:05 np0005558317 python3.9[192986]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:26:06 np0005558317 python3.9[193138]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:26:06 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v480: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:06 np0005558317 python3.9[193290]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:26:06 np0005558317 python3.9[193442]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:26:07 np0005558317 python3.9[193594]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:26:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:26:08 np0005558317 python3.9[193746]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:08 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v481: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:08 np0005558317 python3.9[193871]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765610767.5954168-554-206811968988169/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:26:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:26:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:26:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:26:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:26:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:26:09 np0005558317 python3.9[194023]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:09 np0005558317 python3.9[194148]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765610768.868036-554-154825060888130/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:10 np0005558317 python3.9[194300]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:10 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v482: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:10 np0005558317 python3.9[194425]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765610769.777508-554-237454709167106/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:11 np0005558317 python3.9[194577]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:11 np0005558317 python3.9[194702]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765610770.66293-554-231400239945811/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:11 np0005558317 python3.9[194854]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:12 np0005558317 python3.9[194979]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765610771.5813026-554-10069716134177/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:12 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v483: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:12 np0005558317 python3.9[195131]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:26:13 np0005558317 podman[195228]: 2025-12-13 07:26:13.038295697 +0000 UTC m=+0.042926403 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 13 02:26:13 np0005558317 python3.9[195271]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765610772.4290485-554-134739239786045/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:13 np0005558317 python3.9[195424]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:14 np0005558317 python3.9[195547]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765610773.288587-554-49847572697842/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:14 np0005558317 python3.9[195699]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:14 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v484: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:14 np0005558317 python3.9[195824]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765610774.105453-554-125570532585574/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:15 np0005558317 python3.9[195976]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec 13 02:26:15 np0005558317 python3.9[196129]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:16 np0005558317 python3.9[196281]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:16 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v485: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:16 np0005558317 python3.9[196433]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:17 np0005558317 python3.9[196585]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:17 np0005558317 python3.9[196737]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:26:17 np0005558317 python3.9[196889]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:18 np0005558317 python3.9[197041]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:18 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v486: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:18 np0005558317 python3.9[197193]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:19 np0005558317 python3.9[197345]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:19 np0005558317 python3.9[197497]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:19 np0005558317 python3.9[197649]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:20 np0005558317 python3.9[197801]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:20 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v487: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:20 np0005558317 python3.9[197953]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:21 np0005558317 python3.9[198105]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:21 np0005558317 python3.9[198257]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:22 np0005558317 python3.9[198380]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610781.3013396-775-96218803633723/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:22 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v488: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:22 np0005558317 python3.9[198532]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:26:22 np0005558317 python3.9[198655]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610782.1160512-775-194283496911389/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:23 np0005558317 podman[198779]: 2025-12-13 07:26:23.156113319 +0000 UTC m=+0.065003651 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible)
Dec 13 02:26:23 np0005558317 python3.9[198824]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:23 np0005558317 python3.9[198953]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610782.9213994-775-201541482522592/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:24 np0005558317 python3.9[199105]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:24 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v489: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:24 np0005558317 python3.9[199228]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610783.773347-775-265912632889406/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:24 np0005558317 python3.9[199380]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:25 np0005558317 python3.9[199503]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610784.6021693-775-39341613761582/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:25 np0005558317 python3.9[199655]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:26 np0005558317 python3.9[199778]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610785.4682457-775-224489218045234/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:26 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v490: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:26 np0005558317 python3.9[199930]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:27 np0005558317 python3.9[200053]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610786.3315778-775-61264009139138/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:27 np0005558317 python3.9[200205]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:26:27 np0005558317 python3.9[200328]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610787.1600785-775-279648148565235/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:28 np0005558317 python3.9[200480]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:28 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v491: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:28 np0005558317 python3.9[200603]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610788.0168755-775-198503001401738/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:29 np0005558317 python3.9[200755]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:29 np0005558317 python3.9[200878]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610788.8450496-775-79320314756497/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:30 np0005558317 python3.9[201030]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:30 np0005558317 python3.9[201153]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610789.6832333-775-203849921770375/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:30 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v492: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:30 np0005558317 python3.9[201305]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:31 np0005558317 python3.9[201428]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610790.5229635-775-86555331229973/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:31 np0005558317 python3.9[201580]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:32 np0005558317 python3.9[201703]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610791.3495486-775-41524285152840/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:32 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v493: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:32 np0005558317 python3.9[201855]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:26:32 np0005558317 python3.9[201978]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610792.1607678-775-420766051131/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:33 np0005558317 python3.9[202128]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:26:34 np0005558317 python3.9[202283]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec 13 02:26:34 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v494: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:35 np0005558317 auditd[673]: Audit daemon rotating log files
Dec 13 02:26:35 np0005558317 dbus-broker-launch[734]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec 13 02:26:35 np0005558317 python3.9[202439]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:35 np0005558317 python3.9[202591]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:36 np0005558317 python3.9[202743]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:36 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v495: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:36 np0005558317 python3.9[202895]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:36 np0005558317 python3.9[203047]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:37 np0005558317 python3.9[203199]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:26:37 np0005558317 python3.9[203351]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:26:38
Dec 13 02:26:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:26:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 02:26:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'vms', '.mgr', '.rgw.root']
Dec 13 02:26:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:26:38 np0005558317 python3.9[203503]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:38 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v496: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:38 np0005558317 python3.9[203655]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:26:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:26:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:26:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:26:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:26:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:26:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:26:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:26:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:26:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:26:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:26:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:26:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:26:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:26:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:26:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:26:39 np0005558317 python3.9[203807]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:39 np0005558317 python3.9[203959]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:26:39 np0005558317 systemd[1]: Reloading.
Dec 13 02:26:39 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:26:39 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:26:39 np0005558317 systemd[1]: Starting libvirt logging daemon socket...
Dec 13 02:26:39 np0005558317 systemd[1]: Listening on libvirt logging daemon socket.
Dec 13 02:26:39 np0005558317 systemd[1]: Starting libvirt logging daemon admin socket...
Dec 13 02:26:39 np0005558317 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec 13 02:26:40 np0005558317 systemd[1]: Starting libvirt logging daemon...
Dec 13 02:26:40 np0005558317 systemd[1]: Started libvirt logging daemon.
Dec 13 02:26:40 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v497: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:40 np0005558317 python3.9[204152]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:26:40 np0005558317 systemd[1]: Reloading.
Dec 13 02:26:40 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:26:40 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:26:40 np0005558317 systemd[1]: Starting libvirt nodedev daemon socket...
Dec 13 02:26:40 np0005558317 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec 13 02:26:40 np0005558317 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec 13 02:26:40 np0005558317 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec 13 02:26:40 np0005558317 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec 13 02:26:40 np0005558317 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec 13 02:26:40 np0005558317 systemd[1]: Starting libvirt nodedev daemon...
Dec 13 02:26:40 np0005558317 systemd[1]: Started libvirt nodedev daemon.
Dec 13 02:26:41 np0005558317 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec 13 02:26:41 np0005558317 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec 13 02:26:41 np0005558317 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec 13 02:26:41 np0005558317 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec 13 02:26:41 np0005558317 python3.9[204369]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:26:41 np0005558317 systemd[1]: Reloading.
Dec 13 02:26:41 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:26:41 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:26:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:26:41.636 154121 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:26:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:26:41.636 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:26:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:26:41.636 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:26:41 np0005558317 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec 13 02:26:41 np0005558317 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec 13 02:26:41 np0005558317 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec 13 02:26:41 np0005558317 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec 13 02:26:41 np0005558317 systemd[1]: Starting libvirt proxy daemon...
Dec 13 02:26:41 np0005558317 systemd[1]: Started libvirt proxy daemon.
Dec 13 02:26:42 np0005558317 setroubleshoot[204262]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 55e08cda-00e9-470e-9032-e41e5c81e568
Dec 13 02:26:42 np0005558317 setroubleshoot[204262]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Dec 13 02:26:42 np0005558317 setroubleshoot[204262]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 55e08cda-00e9-470e-9032-e41e5c81e568
Dec 13 02:26:42 np0005558317 setroubleshoot[204262]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Dec 13 02:26:42 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v498: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:42 np0005558317 python3.9[204590]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:26:42 np0005558317 systemd[1]: Reloading.
Dec 13 02:26:42 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:26:42 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:26:42 np0005558317 systemd[1]: Listening on libvirt locking daemon socket.
Dec 13 02:26:42 np0005558317 systemd[1]: Starting libvirt QEMU daemon socket...
Dec 13 02:26:42 np0005558317 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 02:26:42 np0005558317 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec 13 02:26:42 np0005558317 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec 13 02:26:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:26:42 np0005558317 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec 13 02:26:42 np0005558317 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec 13 02:26:42 np0005558317 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec 13 02:26:42 np0005558317 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec 13 02:26:42 np0005558317 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec 13 02:26:42 np0005558317 systemd[1]: Starting libvirt QEMU daemon...
Dec 13 02:26:42 np0005558317 systemd[1]: Started libvirt QEMU daemon.
Dec 13 02:26:43 np0005558317 podman[204777]: 2025-12-13 07:26:43.187015137 +0000 UTC m=+0.045913781 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Dec 13 02:26:43 np0005558317 python3.9[204821]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:26:43 np0005558317 systemd[1]: Reloading.
Dec 13 02:26:43 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:26:43 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:26:43 np0005558317 systemd[1]: Starting libvirt secret daemon socket...
Dec 13 02:26:43 np0005558317 systemd[1]: Listening on libvirt secret daemon socket.
Dec 13 02:26:43 np0005558317 systemd[1]: Starting libvirt secret daemon admin socket...
Dec 13 02:26:43 np0005558317 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec 13 02:26:43 np0005558317 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec 13 02:26:43 np0005558317 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec 13 02:26:43 np0005558317 systemd[1]: Starting libvirt secret daemon...
Dec 13 02:26:43 np0005558317 systemd[1]: Started libvirt secret daemon.
Dec 13 02:26:44 np0005558317 python3.9[205033]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:44 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v499: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:44 np0005558317 python3.9[205185]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 13 02:26:45 np0005558317 python3.9[205337]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:26:45 np0005558317 python3.9[205491]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 13 02:26:46 np0005558317 python3.9[205641]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:46 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v500: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:46 np0005558317 python3.9[205762]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610805.968186-1133-28727704566000/.source.xml follow=False _original_basename=secret.xml.j2 checksum=986bd10345e3383175c34605d56e412042b35351 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:47 np0005558317 python3.9[205914]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 00fdae1b-7fad-5f1b-8734-ba4d9298a6de#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:26:47 np0005558317 python3.9[206076]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:26:48 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v501: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:26:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:26:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:26:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:26:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:26:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:26:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:26:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:26:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:26:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:26:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:26:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:26:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 02:26:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:26:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:26:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:26:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:26:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:26:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:26:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:26:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:26:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:26:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:26:49 np0005558317 python3.9[206539]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:49 np0005558317 python3.9[206691]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:50 np0005558317 python3.9[206876]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610809.4419692-1188-251716400389118/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:50 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:26:50 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:26:50 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:26:50 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:26:50 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:26:50 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:26:50 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:26:50 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:26:50 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:26:50 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:26:50 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:26:50 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:26:50 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v502: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:50 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:26:50 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:26:50 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:26:50 np0005558317 podman[207086]: 2025-12-13 07:26:50.664385509 +0000 UTC m=+0.029646662 container create 43fb120376478f4372e8b832d798721e0cb2279a20e3e9b102a9ee14add8c9c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:26:50 np0005558317 systemd[1]: Started libpod-conmon-43fb120376478f4372e8b832d798721e0cb2279a20e3e9b102a9ee14add8c9c6.scope.
Dec 13 02:26:50 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:26:50 np0005558317 podman[207086]: 2025-12-13 07:26:50.732090324 +0000 UTC m=+0.097351487 container init 43fb120376478f4372e8b832d798721e0cb2279a20e3e9b102a9ee14add8c9c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shtern, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:26:50 np0005558317 podman[207086]: 2025-12-13 07:26:50.737829795 +0000 UTC m=+0.103090949 container start 43fb120376478f4372e8b832d798721e0cb2279a20e3e9b102a9ee14add8c9c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 02:26:50 np0005558317 podman[207086]: 2025-12-13 07:26:50.738904454 +0000 UTC m=+0.104165597 container attach 43fb120376478f4372e8b832d798721e0cb2279a20e3e9b102a9ee14add8c9c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shtern, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 02:26:50 np0005558317 cool_shtern[207119]: 167 167
Dec 13 02:26:50 np0005558317 conmon[207119]: conmon 43fb120376478f4372e8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-43fb120376478f4372e8b832d798721e0cb2279a20e3e9b102a9ee14add8c9c6.scope/container/memory.events
Dec 13 02:26:50 np0005558317 systemd[1]: libpod-43fb120376478f4372e8b832d798721e0cb2279a20e3e9b102a9ee14add8c9c6.scope: Deactivated successfully.
Dec 13 02:26:50 np0005558317 podman[207086]: 2025-12-13 07:26:50.742622618 +0000 UTC m=+0.107883771 container died 43fb120376478f4372e8b832d798721e0cb2279a20e3e9b102a9ee14add8c9c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 02:26:50 np0005558317 podman[207086]: 2025-12-13 07:26:50.653028856 +0000 UTC m=+0.018290019 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:26:50 np0005558317 systemd[1]: var-lib-containers-storage-overlay-d55421f5b8d8a7b2c16e9cc095adcd2c890bc99c611698a169b1ef52c6c31191-merged.mount: Deactivated successfully.
Dec 13 02:26:50 np0005558317 podman[207086]: 2025-12-13 07:26:50.7609256 +0000 UTC m=+0.126186744 container remove 43fb120376478f4372e8b832d798721e0cb2279a20e3e9b102a9ee14add8c9c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shtern, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 02:26:50 np0005558317 systemd[1]: libpod-conmon-43fb120376478f4372e8b832d798721e0cb2279a20e3e9b102a9ee14add8c9c6.scope: Deactivated successfully.
Dec 13 02:26:50 np0005558317 python3.9[207115]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:50 np0005558317 podman[207141]: 2025-12-13 07:26:50.885656921 +0000 UTC m=+0.030465642 container create e2a8e27a232cb0b8ccc99995a7d6ea8aebf5ecab9d6c5a9c83190cbe717efc4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_moser, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:26:50 np0005558317 systemd[1]: Started libpod-conmon-e2a8e27a232cb0b8ccc99995a7d6ea8aebf5ecab9d6c5a9c83190cbe717efc4d.scope.
Dec 13 02:26:50 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:26:50 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5af597d1f3d65db9bc21c49b7e8ee23e07b310b77673758b7ccd6ba37a6fa28f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:26:50 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5af597d1f3d65db9bc21c49b7e8ee23e07b310b77673758b7ccd6ba37a6fa28f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:26:50 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5af597d1f3d65db9bc21c49b7e8ee23e07b310b77673758b7ccd6ba37a6fa28f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:26:50 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5af597d1f3d65db9bc21c49b7e8ee23e07b310b77673758b7ccd6ba37a6fa28f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:26:50 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5af597d1f3d65db9bc21c49b7e8ee23e07b310b77673758b7ccd6ba37a6fa28f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:26:50 np0005558317 podman[207141]: 2025-12-13 07:26:50.946024442 +0000 UTC m=+0.090833183 container init e2a8e27a232cb0b8ccc99995a7d6ea8aebf5ecab9d6c5a9c83190cbe717efc4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:26:50 np0005558317 podman[207141]: 2025-12-13 07:26:50.951079187 +0000 UTC m=+0.095887908 container start e2a8e27a232cb0b8ccc99995a7d6ea8aebf5ecab9d6c5a9c83190cbe717efc4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_moser, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:26:50 np0005558317 podman[207141]: 2025-12-13 07:26:50.952430405 +0000 UTC m=+0.097239126 container attach e2a8e27a232cb0b8ccc99995a7d6ea8aebf5ecab9d6c5a9c83190cbe717efc4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_moser, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:26:50 np0005558317 podman[207141]: 2025-12-13 07:26:50.874371561 +0000 UTC m=+0.019180292 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:26:51 np0005558317 modest_moser[207178]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:26:51 np0005558317 modest_moser[207178]: --> All data devices are unavailable
Dec 13 02:26:51 np0005558317 python3.9[207315]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:51 np0005558317 systemd[1]: libpod-e2a8e27a232cb0b8ccc99995a7d6ea8aebf5ecab9d6c5a9c83190cbe717efc4d.scope: Deactivated successfully.
Dec 13 02:26:51 np0005558317 podman[207328]: 2025-12-13 07:26:51.360137528 +0000 UTC m=+0.016894028 container died e2a8e27a232cb0b8ccc99995a7d6ea8aebf5ecab9d6c5a9c83190cbe717efc4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_moser, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 02:26:51 np0005558317 systemd[1]: var-lib-containers-storage-overlay-5af597d1f3d65db9bc21c49b7e8ee23e07b310b77673758b7ccd6ba37a6fa28f-merged.mount: Deactivated successfully.
Dec 13 02:26:51 np0005558317 podman[207328]: 2025-12-13 07:26:51.385087637 +0000 UTC m=+0.041844137 container remove e2a8e27a232cb0b8ccc99995a7d6ea8aebf5ecab9d6c5a9c83190cbe717efc4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_moser, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:26:51 np0005558317 systemd[1]: libpod-conmon-e2a8e27a232cb0b8ccc99995a7d6ea8aebf5ecab9d6c5a9c83190cbe717efc4d.scope: Deactivated successfully.
Dec 13 02:26:51 np0005558317 python3.9[207459]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:51 np0005558317 podman[207483]: 2025-12-13 07:26:51.732487692 +0000 UTC m=+0.032125099 container create 6f57f3d2e27be8fb222470354e8cdaac51875a7267339b36dbee2c656122c86b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:26:51 np0005558317 systemd[1]: Started libpod-conmon-6f57f3d2e27be8fb222470354e8cdaac51875a7267339b36dbee2c656122c86b.scope.
Dec 13 02:26:51 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:26:51 np0005558317 podman[207483]: 2025-12-13 07:26:51.776280798 +0000 UTC m=+0.075918224 container init 6f57f3d2e27be8fb222470354e8cdaac51875a7267339b36dbee2c656122c86b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_davinci, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:26:51 np0005558317 podman[207483]: 2025-12-13 07:26:51.781959926 +0000 UTC m=+0.081597332 container start 6f57f3d2e27be8fb222470354e8cdaac51875a7267339b36dbee2c656122c86b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:26:51 np0005558317 podman[207483]: 2025-12-13 07:26:51.783060805 +0000 UTC m=+0.082698211 container attach 6f57f3d2e27be8fb222470354e8cdaac51875a7267339b36dbee2c656122c86b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_davinci, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 02:26:51 np0005558317 cranky_davinci[207513]: 167 167
Dec 13 02:26:51 np0005558317 systemd[1]: libpod-6f57f3d2e27be8fb222470354e8cdaac51875a7267339b36dbee2c656122c86b.scope: Deactivated successfully.
Dec 13 02:26:51 np0005558317 podman[207483]: 2025-12-13 07:26:51.786052886 +0000 UTC m=+0.085690291 container died 6f57f3d2e27be8fb222470354e8cdaac51875a7267339b36dbee2c656122c86b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:26:51 np0005558317 systemd[1]: var-lib-containers-storage-overlay-9c48366593077afdbfd905c30938b275d06cc31e0580eab932b192b398585849-merged.mount: Deactivated successfully.
Dec 13 02:26:51 np0005558317 podman[207483]: 2025-12-13 07:26:51.80571978 +0000 UTC m=+0.105357186 container remove 6f57f3d2e27be8fb222470354e8cdaac51875a7267339b36dbee2c656122c86b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_davinci, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Dec 13 02:26:51 np0005558317 podman[207483]: 2025-12-13 07:26:51.721166305 +0000 UTC m=+0.020803732 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:26:51 np0005558317 systemd[1]: libpod-conmon-6f57f3d2e27be8fb222470354e8cdaac51875a7267339b36dbee2c656122c86b.scope: Deactivated successfully.
Dec 13 02:26:51 np0005558317 podman[207610]: 2025-12-13 07:26:51.925815761 +0000 UTC m=+0.027561426 container create 7a14f472295a2e670d41b7bde58cd6d27b690485c52a03f338042c9830b09fac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_wescoff, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:26:51 np0005558317 systemd[1]: Started libpod-conmon-7a14f472295a2e670d41b7bde58cd6d27b690485c52a03f338042c9830b09fac.scope.
Dec 13 02:26:51 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:26:51 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4267b91c5c2089d27140b508b761e7d64215e1b573b8e94097eb1e740da499/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:26:51 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4267b91c5c2089d27140b508b761e7d64215e1b573b8e94097eb1e740da499/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:26:51 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4267b91c5c2089d27140b508b761e7d64215e1b573b8e94097eb1e740da499/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:26:51 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4267b91c5c2089d27140b508b761e7d64215e1b573b8e94097eb1e740da499/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:26:51 np0005558317 podman[207610]: 2025-12-13 07:26:51.981636933 +0000 UTC m=+0.083382606 container init 7a14f472295a2e670d41b7bde58cd6d27b690485c52a03f338042c9830b09fac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_wescoff, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:26:51 np0005558317 podman[207610]: 2025-12-13 07:26:51.987913493 +0000 UTC m=+0.089659147 container start 7a14f472295a2e670d41b7bde58cd6d27b690485c52a03f338042c9830b09fac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 02:26:51 np0005558317 podman[207610]: 2025-12-13 07:26:51.990140647 +0000 UTC m=+0.091886311 container attach 7a14f472295a2e670d41b7bde58cd6d27b690485c52a03f338042c9830b09fac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_wescoff, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 02:26:52 np0005558317 podman[207610]: 2025-12-13 07:26:51.91507238 +0000 UTC m=+0.016818054 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:26:52 np0005558317 python3.9[207680]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:52 np0005558317 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]: {
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:    "0": [
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:        {
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "devices": [
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "/dev/loop3"
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            ],
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "lv_name": "ceph_lv0",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "lv_size": "21470642176",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "name": "ceph_lv0",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "tags": {
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.cluster_name": "ceph",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.crush_device_class": "",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.encrypted": "0",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.objectstore": "bluestore",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.osd_id": "0",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.type": "block",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.vdo": "0",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.with_tpm": "0"
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            },
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "type": "block",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "vg_name": "ceph_vg0"
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:        }
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:    ],
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:    "1": [
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:        {
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "devices": [
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "/dev/loop4"
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            ],
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "lv_name": "ceph_lv1",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "lv_size": "21470642176",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "name": "ceph_lv1",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "tags": {
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.cluster_name": "ceph",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.crush_device_class": "",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.encrypted": "0",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.objectstore": "bluestore",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.osd_id": "1",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.type": "block",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.vdo": "0",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.with_tpm": "0"
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            },
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "type": "block",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "vg_name": "ceph_vg1"
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:        }
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:    ],
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:    "2": [
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:        {
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "devices": [
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "/dev/loop5"
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            ],
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "lv_name": "ceph_lv2",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "lv_size": "21470642176",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "name": "ceph_lv2",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "tags": {
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.cluster_name": "ceph",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.crush_device_class": "",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.encrypted": "0",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.objectstore": "bluestore",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.osd_id": "2",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.type": "block",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.vdo": "0",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:                "ceph.with_tpm": "0"
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            },
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "type": "block",
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:            "vg_name": "ceph_vg2"
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:        }
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]:    ]
Dec 13 02:26:52 np0005558317 vigilant_wescoff[207647]: }
Dec 13 02:26:52 np0005558317 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec 13 02:26:52 np0005558317 podman[207610]: 2025-12-13 07:26:52.242199703 +0000 UTC m=+0.343945357 container died 7a14f472295a2e670d41b7bde58cd6d27b690485c52a03f338042c9830b09fac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_wescoff, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:26:52 np0005558317 systemd[1]: libpod-7a14f472295a2e670d41b7bde58cd6d27b690485c52a03f338042c9830b09fac.scope: Deactivated successfully.
Dec 13 02:26:52 np0005558317 systemd[1]: var-lib-containers-storage-overlay-bd4267b91c5c2089d27140b508b761e7d64215e1b573b8e94097eb1e740da499-merged.mount: Deactivated successfully.
Dec 13 02:26:52 np0005558317 podman[207610]: 2025-12-13 07:26:52.265284758 +0000 UTC m=+0.367030412 container remove 7a14f472295a2e670d41b7bde58cd6d27b690485c52a03f338042c9830b09fac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 02:26:52 np0005558317 systemd[1]: libpod-conmon-7a14f472295a2e670d41b7bde58cd6d27b690485c52a03f338042c9830b09fac.scope: Deactivated successfully.
Dec 13 02:26:52 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v503: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:52 np0005558317 python3.9[207800]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.42gwbqr_ recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:52 np0005558317 podman[207846]: 2025-12-13 07:26:52.605948659 +0000 UTC m=+0.027811977 container create 37b7916ea764c3d936429503949f8c566375f558997ef8f5e02a8630ae8397a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 02:26:52 np0005558317 systemd[1]: Started libpod-conmon-37b7916ea764c3d936429503949f8c566375f558997ef8f5e02a8630ae8397a9.scope.
Dec 13 02:26:52 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:26:52 np0005558317 podman[207846]: 2025-12-13 07:26:52.657965764 +0000 UTC m=+0.079829091 container init 37b7916ea764c3d936429503949f8c566375f558997ef8f5e02a8630ae8397a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_thompson, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:26:52 np0005558317 podman[207846]: 2025-12-13 07:26:52.662891758 +0000 UTC m=+0.084755074 container start 37b7916ea764c3d936429503949f8c566375f558997ef8f5e02a8630ae8397a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_thompson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 02:26:52 np0005558317 podman[207846]: 2025-12-13 07:26:52.663942331 +0000 UTC m=+0.085805648 container attach 37b7916ea764c3d936429503949f8c566375f558997ef8f5e02a8630ae8397a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_thompson, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:26:52 np0005558317 boring_thompson[207875]: 167 167
Dec 13 02:26:52 np0005558317 systemd[1]: libpod-37b7916ea764c3d936429503949f8c566375f558997ef8f5e02a8630ae8397a9.scope: Deactivated successfully.
Dec 13 02:26:52 np0005558317 podman[207846]: 2025-12-13 07:26:52.667490116 +0000 UTC m=+0.089353453 container died 37b7916ea764c3d936429503949f8c566375f558997ef8f5e02a8630ae8397a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:26:52 np0005558317 podman[207846]: 2025-12-13 07:26:52.686316813 +0000 UTC m=+0.108180130 container remove 37b7916ea764c3d936429503949f8c566375f558997ef8f5e02a8630ae8397a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 02:26:52 np0005558317 podman[207846]: 2025-12-13 07:26:52.595322038 +0000 UTC m=+0.017185374 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:26:52 np0005558317 systemd[1]: libpod-conmon-37b7916ea764c3d936429503949f8c566375f558997ef8f5e02a8630ae8397a9.scope: Deactivated successfully.
Dec 13 02:26:52 np0005558317 systemd[1]: var-lib-containers-storage-overlay-883ff7f6ee448938cdd3c436b832c005bec9caf7c3c1d3c2bd89ed590ac870d3-merged.mount: Deactivated successfully.
Dec 13 02:26:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:26:52 np0005558317 podman[207967]: 2025-12-13 07:26:52.807430025 +0000 UTC m=+0.027394321 container create 47e218630f3a946e53d6cd91126421ea325ac4e6edecbfb2dcf68ad81c473312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 02:26:52 np0005558317 systemd[1]: Started libpod-conmon-47e218630f3a946e53d6cd91126421ea325ac4e6edecbfb2dcf68ad81c473312.scope.
Dec 13 02:26:52 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:26:52 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a39c763b7ec6d90bdf0ec7a4c7c1e36b7118481296940b54d2b6b28d38b89af4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:26:52 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a39c763b7ec6d90bdf0ec7a4c7c1e36b7118481296940b54d2b6b28d38b89af4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:26:52 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a39c763b7ec6d90bdf0ec7a4c7c1e36b7118481296940b54d2b6b28d38b89af4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:26:52 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a39c763b7ec6d90bdf0ec7a4c7c1e36b7118481296940b54d2b6b28d38b89af4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:26:52 np0005558317 podman[207967]: 2025-12-13 07:26:52.862855803 +0000 UTC m=+0.082820119 container init 47e218630f3a946e53d6cd91126421ea325ac4e6edecbfb2dcf68ad81c473312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:26:52 np0005558317 podman[207967]: 2025-12-13 07:26:52.868505155 +0000 UTC m=+0.088469452 container start 47e218630f3a946e53d6cd91126421ea325ac4e6edecbfb2dcf68ad81c473312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:26:52 np0005558317 podman[207967]: 2025-12-13 07:26:52.871663688 +0000 UTC m=+0.091628005 container attach 47e218630f3a946e53d6cd91126421ea325ac4e6edecbfb2dcf68ad81c473312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:26:52 np0005558317 podman[207967]: 2025-12-13 07:26:52.796303293 +0000 UTC m=+0.016267599 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:26:53 np0005558317 python3.9[208037]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:53 np0005558317 podman[208125]: 2025-12-13 07:26:53.327391579 +0000 UTC m=+0.105729024 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 13 02:26:53 np0005558317 lvm[208212]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:26:53 np0005558317 lvm[208211]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:26:53 np0005558317 lvm[208211]: VG ceph_vg0 finished
Dec 13 02:26:53 np0005558317 lvm[208212]: VG ceph_vg1 finished
Dec 13 02:26:53 np0005558317 python3.9[208177]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:53 np0005558317 lvm[208215]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:26:53 np0005558317 lvm[208215]: VG ceph_vg2 finished
Dec 13 02:26:53 np0005558317 hardcore_morse[208011]: {}
Dec 13 02:26:53 np0005558317 lvm[208218]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:26:53 np0005558317 lvm[208218]: VG ceph_vg2 finished
Dec 13 02:26:53 np0005558317 systemd[1]: libpod-47e218630f3a946e53d6cd91126421ea325ac4e6edecbfb2dcf68ad81c473312.scope: Deactivated successfully.
Dec 13 02:26:53 np0005558317 podman[207967]: 2025-12-13 07:26:53.489037964 +0000 UTC m=+0.709002259 container died 47e218630f3a946e53d6cd91126421ea325ac4e6edecbfb2dcf68ad81c473312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:26:53 np0005558317 systemd[1]: var-lib-containers-storage-overlay-a39c763b7ec6d90bdf0ec7a4c7c1e36b7118481296940b54d2b6b28d38b89af4-merged.mount: Deactivated successfully.
Dec 13 02:26:53 np0005558317 podman[207967]: 2025-12-13 07:26:53.51324654 +0000 UTC m=+0.733210836 container remove 47e218630f3a946e53d6cd91126421ea325ac4e6edecbfb2dcf68ad81c473312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:26:53 np0005558317 systemd[1]: libpod-conmon-47e218630f3a946e53d6cd91126421ea325ac4e6edecbfb2dcf68ad81c473312.scope: Deactivated successfully.
Dec 13 02:26:53 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:26:53 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:26:53 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:26:53 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:26:53 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:26:53 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:26:53 np0005558317 python3.9[208405]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:26:54 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v504: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:54 np0005558317 python3[208558]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 13 02:26:55 np0005558317 python3.9[208710]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:55 np0005558317 python3.9[208788]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:55 np0005558317 python3.9[208940]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:56 np0005558317 python3.9[209018]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:56 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v505: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:56 np0005558317 python3.9[209170]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:57 np0005558317 python3.9[209248]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:57 np0005558317 python3.9[209400]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:26:57 np0005558317 python3.9[209478]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:58 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v506: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:26:58 np0005558317 python3.9[209630]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:58 np0005558317 python3.9[209755]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610818.048048-1313-24626378437614/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:59 np0005558317 python3.9[209907]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:59 np0005558317 python3.9[210059]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:27:00 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v507: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:00 np0005558317 python3.9[210214]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:01 np0005558317 python3.9[210366]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:27:01 np0005558317 python3.9[210519]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:27:01 np0005558317 python3.9[210673]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:27:02 np0005558317 python3.9[210828]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:02 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v508: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:27:02 np0005558317 python3.9[210980]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:27:03 np0005558317 python3.9[211103]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610822.535394-1385-2114878408640/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:03 np0005558317 python3.9[211255]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:27:04 np0005558317 python3.9[211378]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610823.3537269-1400-261279812342670/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:04 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v509: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:04 np0005558317 python3.9[211530]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:27:04 np0005558317 python3.9[211653]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610824.1968408-1415-196765772484420/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:05 np0005558317 python3.9[211805]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:27:05 np0005558317 systemd[1]: Reloading.
Dec 13 02:27:05 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:27:05 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:27:05 np0005558317 systemd[1]: Reached target edpm_libvirt.target.
Dec 13 02:27:06 np0005558317 python3.9[211995]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 13 02:27:06 np0005558317 systemd[1]: Reloading.
Dec 13 02:27:06 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v510: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:06 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:27:06 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:27:06 np0005558317 systemd[1]: Reloading.
Dec 13 02:27:06 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:27:06 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:27:07 np0005558317 systemd[1]: session-51.scope: Deactivated successfully.
Dec 13 02:27:07 np0005558317 systemd[1]: session-51.scope: Consumed 2min 26.137s CPU time.
Dec 13 02:27:07 np0005558317 systemd-logind[745]: Session 51 logged out. Waiting for processes to exit.
Dec 13 02:27:07 np0005558317 systemd-logind[745]: Removed session 51.
Dec 13 02:27:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:27:08 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v511: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:27:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:27:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:27:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:27:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:27:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:27:10 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v512: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:12 np0005558317 systemd-logind[745]: New session 52 of user zuul.
Dec 13 02:27:12 np0005558317 systemd[1]: Started Session 52 of User zuul.
Dec 13 02:27:12 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v513: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:27:13 np0005558317 python3.9[212245]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:27:13 np0005558317 podman[212326]: 2025-12-13 07:27:13.703979343 +0000 UTC m=+0.044577010 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 13 02:27:14 np0005558317 python3.9[212415]: ansible-ansible.builtin.service_facts Invoked
Dec 13 02:27:14 np0005558317 network[212432]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 02:27:14 np0005558317 network[212433]: 'network-scripts' will be removed from distribution in near future.
Dec 13 02:27:14 np0005558317 network[212434]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 02:27:14 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v514: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:16 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v515: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:16 np0005558317 python3.9[212706]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:27:17 np0005558317 python3.9[212790]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:27:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:27:18 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v516: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:20 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v517: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:22 np0005558317 python3.9[212943]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:27:22 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v518: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:27:23 np0005558317 python3.9[213095]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:27:23 np0005558317 podman[213220]: 2025-12-13 07:27:23.482174597 +0000 UTC m=+0.061544045 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:27:23 np0005558317 python3.9[213265]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:27:24 np0005558317 python3.9[213424]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:27:24 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v519: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:24 np0005558317 python3.9[213577]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:27:25 np0005558317 python3.9[213700]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610844.1860075-95-182838216523832/.source.iscsi _original_basename=.xzz_57b9 follow=False checksum=676550f67cdc4f2536cb95e8274b343680dfe66f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:25 np0005558317 python3.9[213852]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:26 np0005558317 python3.9[214004]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:26 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v520: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:27 np0005558317 python3.9[214156]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:27:27 np0005558317 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec 13 02:27:27 np0005558317 python3.9[214312]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:27:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:27:27 np0005558317 systemd[1]: Reloading.
Dec 13 02:27:27 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:27:27 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:27:28 np0005558317 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 13 02:27:28 np0005558317 systemd[1]: Starting Open-iSCSI...
Dec 13 02:27:28 np0005558317 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 02:27:28 np0005558317 systemd[1]: Started Open-iSCSI.
Dec 13 02:27:28 np0005558317 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Dec 13 02:27:28 np0005558317 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Dec 13 02:27:28 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v521: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:28 np0005558317 python3.9[214512]: ansible-ansible.builtin.service_facts Invoked
Dec 13 02:27:28 np0005558317 network[214529]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 02:27:28 np0005558317 network[214530]: 'network-scripts' will be removed from distribution in near future.
Dec 13 02:27:28 np0005558317 network[214531]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 02:27:30 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v522: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:31 np0005558317 python3.9[214803]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 13 02:27:31 np0005558317 python3.9[214955]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec 13 02:27:32 np0005558317 python3.9[215111]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:27:32 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v523: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:27:32 np0005558317 python3.9[215234]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610852.1092741-172-32192198607542/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:33 np0005558317 python3.9[215386]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:34 np0005558317 python3.9[215538]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:27:34 np0005558317 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 02:27:34 np0005558317 systemd[1]: Stopped Load Kernel Modules.
Dec 13 02:27:34 np0005558317 systemd[1]: Stopping Load Kernel Modules...
Dec 13 02:27:34 np0005558317 systemd[1]: Starting Load Kernel Modules...
Dec 13 02:27:34 np0005558317 systemd[1]: Finished Load Kernel Modules.
Dec 13 02:27:34 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v524: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:34 np0005558317 python3.9[215694]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:27:35 np0005558317 python3.9[215846]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:27:35 np0005558317 python3.9[215998]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:27:36 np0005558317 python3.9[216150]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:27:36 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v525: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:36 np0005558317 python3.9[216273]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610855.906437-230-33455022249584/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:37 np0005558317 python3.9[216425]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:27:37 np0005558317 python3.9[216578]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:27:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:27:38
Dec 13 02:27:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:27:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 02:27:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] pools ['backups', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'vms', 'images', 'cephfs.cephfs.data', '.mgr']
Dec 13 02:27:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:27:38 np0005558317 python3.9[216730]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:38 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v526: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:38 np0005558317 python3.9[216882]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:27:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:27:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:27:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:27:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:27:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:27:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:27:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:27:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:27:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:27:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:27:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:27:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:27:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:27:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:27:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:27:39 np0005558317 python3.9[217034]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:39 np0005558317 python3.9[217186]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:40 np0005558317 python3.9[217338]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:40 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v527: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:40 np0005558317 python3.9[217490]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.697782) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610860697807, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2038, "num_deletes": 251, "total_data_size": 3582461, "memory_usage": 3630904, "flush_reason": "Manual Compaction"}
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610860704177, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3506243, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9690, "largest_seqno": 11727, "table_properties": {"data_size": 3496930, "index_size": 5935, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17751, "raw_average_key_size": 19, "raw_value_size": 3478568, "raw_average_value_size": 3810, "num_data_blocks": 269, "num_entries": 913, "num_filter_entries": 913, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610625, "oldest_key_time": 1765610625, "file_creation_time": 1765610860, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 6430 microseconds, and 4973 cpu microseconds.
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.704213) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3506243 bytes OK
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.704226) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.704617) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.704628) EVENT_LOG_v1 {"time_micros": 1765610860704625, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.704638) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3573968, prev total WAL file size 3573968, number of live WAL files 2.
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.705261) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3424KB)], [26(6153KB)]
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610860705283, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9807221, "oldest_snapshot_seqno": -1}
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3691 keys, 8220298 bytes, temperature: kUnknown
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610860721787, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8220298, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8191712, "index_size": 18236, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 88610, "raw_average_key_size": 24, "raw_value_size": 8121187, "raw_average_value_size": 2200, "num_data_blocks": 792, "num_entries": 3691, "num_filter_entries": 3691, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610001, "oldest_key_time": 0, "file_creation_time": 1765610860, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.721905) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8220298 bytes
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.722195) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 593.1 rd, 497.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 6.0 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4205, records dropped: 514 output_compression: NoCompression
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.722208) EVENT_LOG_v1 {"time_micros": 1765610860722202, "job": 10, "event": "compaction_finished", "compaction_time_micros": 16536, "compaction_time_cpu_micros": 13344, "output_level": 6, "num_output_files": 1, "total_output_size": 8220298, "num_input_records": 4205, "num_output_records": 3691, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610860722668, "job": 10, "event": "table_file_deletion", "file_number": 28}
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610860723405, "job": 10, "event": "table_file_deletion", "file_number": 26}
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.705225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.723425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.723428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.723429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.723444) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:27:40 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.723455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:27:41 np0005558317 python3.9[217642]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:27:41 np0005558317 python3.9[217796]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:27:41.637 154121 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:27:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:27:41.637 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:27:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:27:41.638 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:27:42 np0005558317 python3.9[217948]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:27:42 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v528: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:42 np0005558317 python3.9[218100]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:27:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:27:42 np0005558317 python3.9[218178]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:27:43 np0005558317 python3.9[218330]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:27:43 np0005558317 python3.9[218408]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:27:44 np0005558317 podman[218532]: 2025-12-13 07:27:44.046487332 +0000 UTC m=+0.040934021 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec 13 02:27:44 np0005558317 python3.9[218574]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:44 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v529: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:44 np0005558317 python3.9[218730]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:27:45 np0005558317 python3.9[218808]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:45 np0005558317 python3.9[218960]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:27:45 np0005558317 python3.9[219038]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:46 np0005558317 python3.9[219190]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:27:46 np0005558317 systemd[1]: Reloading.
Dec 13 02:27:46 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v530: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:46 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:27:46 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:27:47 np0005558317 python3.9[219380]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:27:47 np0005558317 python3.9[219458]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:27:47 np0005558317 python3.9[219610]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:27:48 np0005558317 python3.9[219688]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:48 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v531: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:27:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:27:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:27:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:27:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:27:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:27:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:27:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:27:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:27:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:27:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:27:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:27:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 02:27:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:27:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:27:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:27:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:27:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:27:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:27:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:27:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:27:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:27:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:27:48 np0005558317 python3.9[219840]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:27:48 np0005558317 systemd[1]: Reloading.
Dec 13 02:27:48 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:27:48 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:27:49 np0005558317 systemd[1]: Starting Create netns directory...
Dec 13 02:27:49 np0005558317 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 13 02:27:49 np0005558317 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 13 02:27:49 np0005558317 systemd[1]: Finished Create netns directory.
Dec 13 02:27:49 np0005558317 python3.9[220033]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:27:50 np0005558317 python3.9[220185]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:27:50 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v532: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:50 np0005558317 python3.9[220308]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610869.947117-437-229529941646148/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:27:51 np0005558317 python3.9[220460]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:27:51 np0005558317 python3.9[220612]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:27:52 np0005558317 python3.9[220735]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610871.4805546-462-14490369811053/.source.json _original_basename=.7iiheo2s follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:52 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v533: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:27:52 np0005558317 python3.9[220887]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:53 np0005558317 podman[221134]: 2025-12-13 07:27:53.619237013 +0000 UTC m=+0.056431793 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 13 02:27:54 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:27:54 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:27:54 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:27:54 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:27:54 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:27:54 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:27:54 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:27:54 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:27:54 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:27:54 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:27:54 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:27:54 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:27:54 np0005558317 podman[221477]: 2025-12-13 07:27:54.423154703 +0000 UTC m=+0.028470155 container create 0b31ba9d55a7b2a5574bcbf7fa0226d5708cdc3f29a9f89c247f64640640c4b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:27:54 np0005558317 python3.9[221466]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec 13 02:27:54 np0005558317 systemd[1]: Started libpod-conmon-0b31ba9d55a7b2a5574bcbf7fa0226d5708cdc3f29a9f89c247f64640640c4b1.scope.
Dec 13 02:27:54 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v534: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:54 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:27:54 np0005558317 podman[221477]: 2025-12-13 07:27:54.475024436 +0000 UTC m=+0.080339897 container init 0b31ba9d55a7b2a5574bcbf7fa0226d5708cdc3f29a9f89c247f64640640c4b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:27:54 np0005558317 podman[221477]: 2025-12-13 07:27:54.480928589 +0000 UTC m=+0.086244039 container start 0b31ba9d55a7b2a5574bcbf7fa0226d5708cdc3f29a9f89c247f64640640c4b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lovelace, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Dec 13 02:27:54 np0005558317 podman[221477]: 2025-12-13 07:27:54.482076616 +0000 UTC m=+0.087392068 container attach 0b31ba9d55a7b2a5574bcbf7fa0226d5708cdc3f29a9f89c247f64640640c4b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lovelace, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:27:54 np0005558317 hungry_lovelace[221491]: 167 167
Dec 13 02:27:54 np0005558317 systemd[1]: libpod-0b31ba9d55a7b2a5574bcbf7fa0226d5708cdc3f29a9f89c247f64640640c4b1.scope: Deactivated successfully.
Dec 13 02:27:54 np0005558317 conmon[221491]: conmon 0b31ba9d55a7b2a5574b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0b31ba9d55a7b2a5574bcbf7fa0226d5708cdc3f29a9f89c247f64640640c4b1.scope/container/memory.events
Dec 13 02:27:54 np0005558317 podman[221477]: 2025-12-13 07:27:54.485101334 +0000 UTC m=+0.090416786 container died 0b31ba9d55a7b2a5574bcbf7fa0226d5708cdc3f29a9f89c247f64640640c4b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 02:27:54 np0005558317 systemd[1]: var-lib-containers-storage-overlay-5afdbc94c893fd414fec3d8a615358563ce9bddd9bc6af7e36cc87eb25f73bdd-merged.mount: Deactivated successfully.
Dec 13 02:27:54 np0005558317 podman[221477]: 2025-12-13 07:27:54.504605619 +0000 UTC m=+0.109921070 container remove 0b31ba9d55a7b2a5574bcbf7fa0226d5708cdc3f29a9f89c247f64640640c4b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lovelace, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 02:27:54 np0005558317 podman[221477]: 2025-12-13 07:27:54.411502404 +0000 UTC m=+0.016817875 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:27:54 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:27:54 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:27:54 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:27:54 np0005558317 systemd[1]: libpod-conmon-0b31ba9d55a7b2a5574bcbf7fa0226d5708cdc3f29a9f89c247f64640640c4b1.scope: Deactivated successfully.
Dec 13 02:27:54 np0005558317 podman[221545]: 2025-12-13 07:27:54.623605419 +0000 UTC m=+0.026827154 container create 756b8c15debe06033fb3b9efa387931b2e4c227e4184ee5c861e387fbf9a84da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 02:27:54 np0005558317 systemd[1]: Started libpod-conmon-756b8c15debe06033fb3b9efa387931b2e4c227e4184ee5c861e387fbf9a84da.scope.
Dec 13 02:27:54 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:27:54 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69d016ae3c5fa208f8d5fe1954c23a3cdd8985a44b6c880ad113b1d96047d4e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:27:54 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69d016ae3c5fa208f8d5fe1954c23a3cdd8985a44b6c880ad113b1d96047d4e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:27:54 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69d016ae3c5fa208f8d5fe1954c23a3cdd8985a44b6c880ad113b1d96047d4e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:27:54 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69d016ae3c5fa208f8d5fe1954c23a3cdd8985a44b6c880ad113b1d96047d4e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:27:54 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69d016ae3c5fa208f8d5fe1954c23a3cdd8985a44b6c880ad113b1d96047d4e0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:27:54 np0005558317 podman[221545]: 2025-12-13 07:27:54.675747836 +0000 UTC m=+0.078969560 container init 756b8c15debe06033fb3b9efa387931b2e4c227e4184ee5c861e387fbf9a84da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_nash, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:27:54 np0005558317 podman[221545]: 2025-12-13 07:27:54.681019878 +0000 UTC m=+0.084241603 container start 756b8c15debe06033fb3b9efa387931b2e4c227e4184ee5c861e387fbf9a84da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_nash, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:27:54 np0005558317 podman[221545]: 2025-12-13 07:27:54.682009589 +0000 UTC m=+0.085231314 container attach 756b8c15debe06033fb3b9efa387931b2e4c227e4184ee5c861e387fbf9a84da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 02:27:54 np0005558317 podman[221545]: 2025-12-13 07:27:54.612727435 +0000 UTC m=+0.015949180 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:27:55 np0005558317 crazy_nash[221602]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:27:55 np0005558317 crazy_nash[221602]: --> All data devices are unavailable
Dec 13 02:27:55 np0005558317 systemd[1]: libpod-756b8c15debe06033fb3b9efa387931b2e4c227e4184ee5c861e387fbf9a84da.scope: Deactivated successfully.
Dec 13 02:27:55 np0005558317 conmon[221602]: conmon 756b8c15debe06033fb3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-756b8c15debe06033fb3b9efa387931b2e4c227e4184ee5c861e387fbf9a84da.scope/container/memory.events
Dec 13 02:27:55 np0005558317 podman[221545]: 2025-12-13 07:27:55.055687412 +0000 UTC m=+0.458909136 container died 756b8c15debe06033fb3b9efa387931b2e4c227e4184ee5c861e387fbf9a84da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_nash, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 02:27:55 np0005558317 systemd[1]: var-lib-containers-storage-overlay-69d016ae3c5fa208f8d5fe1954c23a3cdd8985a44b6c880ad113b1d96047d4e0-merged.mount: Deactivated successfully.
Dec 13 02:27:55 np0005558317 python3.9[221689]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 13 02:27:55 np0005558317 podman[221545]: 2025-12-13 07:27:55.093054484 +0000 UTC m=+0.496276210 container remove 756b8c15debe06033fb3b9efa387931b2e4c227e4184ee5c861e387fbf9a84da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 02:27:55 np0005558317 systemd[1]: libpod-conmon-756b8c15debe06033fb3b9efa387931b2e4c227e4184ee5c861e387fbf9a84da.scope: Deactivated successfully.
Dec 13 02:27:55 np0005558317 podman[221843]: 2025-12-13 07:27:55.424319617 +0000 UTC m=+0.026617712 container create ba519715a4f83f14268c54eec9fb1ca817bda1b0db7b6f135ca716dc02e3cd9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 02:27:55 np0005558317 systemd[1]: Started libpod-conmon-ba519715a4f83f14268c54eec9fb1ca817bda1b0db7b6f135ca716dc02e3cd9f.scope.
Dec 13 02:27:55 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:27:55 np0005558317 podman[221843]: 2025-12-13 07:27:55.471185831 +0000 UTC m=+0.073483946 container init ba519715a4f83f14268c54eec9fb1ca817bda1b0db7b6f135ca716dc02e3cd9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 02:27:55 np0005558317 podman[221843]: 2025-12-13 07:27:55.478245005 +0000 UTC m=+0.080543099 container start ba519715a4f83f14268c54eec9fb1ca817bda1b0db7b6f135ca716dc02e3cd9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cartwright, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Dec 13 02:27:55 np0005558317 podman[221843]: 2025-12-13 07:27:55.479468886 +0000 UTC m=+0.081766981 container attach ba519715a4f83f14268c54eec9fb1ca817bda1b0db7b6f135ca716dc02e3cd9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 02:27:55 np0005558317 quirky_cartwright[221863]: 167 167
Dec 13 02:27:55 np0005558317 systemd[1]: libpod-ba519715a4f83f14268c54eec9fb1ca817bda1b0db7b6f135ca716dc02e3cd9f.scope: Deactivated successfully.
Dec 13 02:27:55 np0005558317 podman[221843]: 2025-12-13 07:27:55.48172606 +0000 UTC m=+0.084024155 container died ba519715a4f83f14268c54eec9fb1ca817bda1b0db7b6f135ca716dc02e3cd9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cartwright, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 02:27:55 np0005558317 systemd[1]: var-lib-containers-storage-overlay-2939b91c450a5c5ca2684924eb5dd19bb91bf1e0672682bfac3191e41b96843a-merged.mount: Deactivated successfully.
Dec 13 02:27:55 np0005558317 podman[221843]: 2025-12-13 07:27:55.498278725 +0000 UTC m=+0.100576820 container remove ba519715a4f83f14268c54eec9fb1ca817bda1b0db7b6f135ca716dc02e3cd9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 02:27:55 np0005558317 podman[221843]: 2025-12-13 07:27:55.413907749 +0000 UTC m=+0.016205864 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:27:55 np0005558317 systemd[1]: libpod-conmon-ba519715a4f83f14268c54eec9fb1ca817bda1b0db7b6f135ca716dc02e3cd9f.scope: Deactivated successfully.
Dec 13 02:27:55 np0005558317 podman[221955]: 2025-12-13 07:27:55.651894796 +0000 UTC m=+0.054162104 container create 144ba1a18220709b20b9c946ad4690fe9f39bec0ec9b80f7df61f6084546b458 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 02:27:55 np0005558317 systemd[1]: Started libpod-conmon-144ba1a18220709b20b9c946ad4690fe9f39bec0ec9b80f7df61f6084546b458.scope.
Dec 13 02:27:55 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:27:55 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edd6f4d35cda917a9d432e5e649d40e3178be7c933e35eacf6b7a61d9689def/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:27:55 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edd6f4d35cda917a9d432e5e649d40e3178be7c933e35eacf6b7a61d9689def/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:27:55 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edd6f4d35cda917a9d432e5e649d40e3178be7c933e35eacf6b7a61d9689def/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:27:55 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edd6f4d35cda917a9d432e5e649d40e3178be7c933e35eacf6b7a61d9689def/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:27:55 np0005558317 podman[221955]: 2025-12-13 07:27:55.708236568 +0000 UTC m=+0.110503876 container init 144ba1a18220709b20b9c946ad4690fe9f39bec0ec9b80f7df61f6084546b458 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 02:27:55 np0005558317 podman[221955]: 2025-12-13 07:27:55.714093951 +0000 UTC m=+0.116361260 container start 144ba1a18220709b20b9c946ad4690fe9f39bec0ec9b80f7df61f6084546b458 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 02:27:55 np0005558317 podman[221955]: 2025-12-13 07:27:55.71551325 +0000 UTC m=+0.117780559 container attach 144ba1a18220709b20b9c946ad4690fe9f39bec0ec9b80f7df61f6084546b458 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wilbur, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:27:55 np0005558317 podman[221955]: 2025-12-13 07:27:55.641322356 +0000 UTC m=+0.043589685 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:27:55 np0005558317 python3.9[221952]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]: {
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:    "0": [
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:        {
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "devices": [
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "/dev/loop3"
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            ],
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "lv_name": "ceph_lv0",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "lv_size": "21470642176",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "name": "ceph_lv0",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "tags": {
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.cluster_name": "ceph",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.crush_device_class": "",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.encrypted": "0",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.objectstore": "bluestore",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.osd_id": "0",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.type": "block",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.vdo": "0",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.with_tpm": "0"
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            },
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "type": "block",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "vg_name": "ceph_vg0"
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:        }
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:    ],
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:    "1": [
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:        {
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "devices": [
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "/dev/loop4"
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            ],
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "lv_name": "ceph_lv1",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "lv_size": "21470642176",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "name": "ceph_lv1",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "tags": {
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.cluster_name": "ceph",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.crush_device_class": "",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.encrypted": "0",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.objectstore": "bluestore",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.osd_id": "1",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.type": "block",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.vdo": "0",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.with_tpm": "0"
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            },
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "type": "block",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "vg_name": "ceph_vg1"
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:        }
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:    ],
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:    "2": [
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:        {
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "devices": [
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "/dev/loop5"
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            ],
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "lv_name": "ceph_lv2",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "lv_size": "21470642176",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "name": "ceph_lv2",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "tags": {
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.cluster_name": "ceph",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.crush_device_class": "",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.encrypted": "0",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.objectstore": "bluestore",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.osd_id": "2",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.type": "block",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.vdo": "0",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:                "ceph.with_tpm": "0"
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            },
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "type": "block",
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:            "vg_name": "ceph_vg2"
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:        }
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]:    ]
Dec 13 02:27:55 np0005558317 frosty_wilbur[221968]: }
Dec 13 02:27:55 np0005558317 systemd[1]: libpod-144ba1a18220709b20b9c946ad4690fe9f39bec0ec9b80f7df61f6084546b458.scope: Deactivated successfully.
Dec 13 02:27:55 np0005558317 podman[221955]: 2025-12-13 07:27:55.950921218 +0000 UTC m=+0.353188527 container died 144ba1a18220709b20b9c946ad4690fe9f39bec0ec9b80f7df61f6084546b458 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 02:27:55 np0005558317 systemd[1]: var-lib-containers-storage-overlay-8edd6f4d35cda917a9d432e5e649d40e3178be7c933e35eacf6b7a61d9689def-merged.mount: Deactivated successfully.
Dec 13 02:27:55 np0005558317 podman[221955]: 2025-12-13 07:27:55.973198858 +0000 UTC m=+0.375466166 container remove 144ba1a18220709b20b9c946ad4690fe9f39bec0ec9b80f7df61f6084546b458 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:27:55 np0005558317 systemd[1]: libpod-conmon-144ba1a18220709b20b9c946ad4690fe9f39bec0ec9b80f7df61f6084546b458.scope: Deactivated successfully.
Dec 13 02:27:56 np0005558317 podman[222090]: 2025-12-13 07:27:56.305837101 +0000 UTC m=+0.028277021 container create 1003a1b793056f4e15c164e9a4655fd7f2bc27ec9b2913ee13137308be8b420d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 02:27:56 np0005558317 systemd[1]: Started libpod-conmon-1003a1b793056f4e15c164e9a4655fd7f2bc27ec9b2913ee13137308be8b420d.scope.
Dec 13 02:27:56 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:27:56 np0005558317 podman[222090]: 2025-12-13 07:27:56.365926681 +0000 UTC m=+0.088366591 container init 1003a1b793056f4e15c164e9a4655fd7f2bc27ec9b2913ee13137308be8b420d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_newton, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:27:56 np0005558317 podman[222090]: 2025-12-13 07:27:56.370228139 +0000 UTC m=+0.092668049 container start 1003a1b793056f4e15c164e9a4655fd7f2bc27ec9b2913ee13137308be8b420d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_newton, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:27:56 np0005558317 frosty_newton[222103]: 167 167
Dec 13 02:27:56 np0005558317 systemd[1]: libpod-1003a1b793056f4e15c164e9a4655fd7f2bc27ec9b2913ee13137308be8b420d.scope: Deactivated successfully.
Dec 13 02:27:56 np0005558317 podman[222090]: 2025-12-13 07:27:56.374045777 +0000 UTC m=+0.096485697 container attach 1003a1b793056f4e15c164e9a4655fd7f2bc27ec9b2913ee13137308be8b420d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_newton, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:27:56 np0005558317 podman[222090]: 2025-12-13 07:27:56.374676133 +0000 UTC m=+0.097116044 container died 1003a1b793056f4e15c164e9a4655fd7f2bc27ec9b2913ee13137308be8b420d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_newton, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 02:27:56 np0005558317 systemd[1]: var-lib-containers-storage-overlay-1528fbf70bf7d0d07aaefb013317ffcf72fb6c98db6244495f22ae9203940205-merged.mount: Deactivated successfully.
Dec 13 02:27:56 np0005558317 podman[222090]: 2025-12-13 07:27:56.29438604 +0000 UTC m=+0.016825970 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:27:56 np0005558317 podman[222090]: 2025-12-13 07:27:56.392418113 +0000 UTC m=+0.114858023 container remove 1003a1b793056f4e15c164e9a4655fd7f2bc27ec9b2913ee13137308be8b420d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_newton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:27:56 np0005558317 systemd[1]: libpod-conmon-1003a1b793056f4e15c164e9a4655fd7f2bc27ec9b2913ee13137308be8b420d.scope: Deactivated successfully.
Dec 13 02:27:56 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v535: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:56 np0005558317 podman[222133]: 2025-12-13 07:27:56.509934705 +0000 UTC m=+0.026076704 container create 2fe7717ee0232ff16644bb3f333235f9bb6a97cb7ce961afb4ecf41594b8821a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_curran, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 02:27:56 np0005558317 systemd[1]: Started libpod-conmon-2fe7717ee0232ff16644bb3f333235f9bb6a97cb7ce961afb4ecf41594b8821a.scope.
Dec 13 02:27:56 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:27:56 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e2838ca76fda98571ae70e876789f5799b4b77f357f8580335e016ad39ebd1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:27:56 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e2838ca76fda98571ae70e876789f5799b4b77f357f8580335e016ad39ebd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:27:56 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e2838ca76fda98571ae70e876789f5799b4b77f357f8580335e016ad39ebd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:27:56 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e2838ca76fda98571ae70e876789f5799b4b77f357f8580335e016ad39ebd1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:27:56 np0005558317 podman[222133]: 2025-12-13 07:27:56.572079649 +0000 UTC m=+0.088221667 container init 2fe7717ee0232ff16644bb3f333235f9bb6a97cb7ce961afb4ecf41594b8821a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:27:56 np0005558317 podman[222133]: 2025-12-13 07:27:56.577919821 +0000 UTC m=+0.094061820 container start 2fe7717ee0232ff16644bb3f333235f9bb6a97cb7ce961afb4ecf41594b8821a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:27:56 np0005558317 podman[222133]: 2025-12-13 07:27:56.579086765 +0000 UTC m=+0.095228784 container attach 2fe7717ee0232ff16644bb3f333235f9bb6a97cb7ce961afb4ecf41594b8821a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_curran, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 02:27:56 np0005558317 podman[222133]: 2025-12-13 07:27:56.500195863 +0000 UTC m=+0.016337882 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:27:57 np0005558317 python3[222279]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 13 02:27:57 np0005558317 lvm[222362]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:27:57 np0005558317 lvm[222361]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:27:57 np0005558317 lvm[222361]: VG ceph_vg0 finished
Dec 13 02:27:57 np0005558317 lvm[222362]: VG ceph_vg1 finished
Dec 13 02:27:57 np0005558317 lvm[222365]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:27:57 np0005558317 lvm[222365]: VG ceph_vg2 finished
Dec 13 02:27:57 np0005558317 dreamy_curran[222190]: {}
Dec 13 02:27:57 np0005558317 lvm[222367]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:27:57 np0005558317 lvm[222367]: VG ceph_vg0 finished
Dec 13 02:27:57 np0005558317 systemd[1]: libpod-2fe7717ee0232ff16644bb3f333235f9bb6a97cb7ce961afb4ecf41594b8821a.scope: Deactivated successfully.
Dec 13 02:27:57 np0005558317 podman[222133]: 2025-12-13 07:27:57.226419031 +0000 UTC m=+0.742561030 container died 2fe7717ee0232ff16644bb3f333235f9bb6a97cb7ce961afb4ecf41594b8821a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_curran, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:27:57 np0005558317 systemd[1]: var-lib-containers-storage-overlay-23e2838ca76fda98571ae70e876789f5799b4b77f357f8580335e016ad39ebd1-merged.mount: Deactivated successfully.
Dec 13 02:27:57 np0005558317 podman[222133]: 2025-12-13 07:27:57.252501426 +0000 UTC m=+0.768643425 container remove 2fe7717ee0232ff16644bb3f333235f9bb6a97cb7ce961afb4ecf41594b8821a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:27:57 np0005558317 systemd[1]: libpod-conmon-2fe7717ee0232ff16644bb3f333235f9bb6a97cb7ce961afb4ecf41594b8821a.scope: Deactivated successfully.
Dec 13 02:27:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:27:57 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:27:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:27:57 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:27:57 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:27:57 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:27:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:27:58 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v536: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:27:58 np0005558317 podman[222345]: 2025-12-13 07:27:58.901983166 +0000 UTC m=+1.804118171 image pull bcd3898ac099c7fff3d2ff3fc32de931119ed36068f8a2617bd8fa95e51d1b81 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f
Dec 13 02:27:58 np0005558317 podman[222439]: 2025-12-13 07:27:58.997056327 +0000 UTC m=+0.027861841 container create f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 02:27:58 np0005558317 podman[222439]: 2025-12-13 07:27:58.984015946 +0000 UTC m=+0.014821460 image pull bcd3898ac099c7fff3d2ff3fc32de931119ed36068f8a2617bd8fa95e51d1b81 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f
Dec 13 02:27:59 np0005558317 python3[222279]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f
Dec 13 02:27:59 np0005558317 python3.9[222619]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:28:00 np0005558317 python3.9[222773]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:00 np0005558317 python3.9[222849]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:28:00 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v537: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:00 np0005558317 python3.9[223000]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765610880.4340858-550-196929256557919/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:01 np0005558317 python3.9[223076]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 02:28:01 np0005558317 systemd[1]: Reloading.
Dec 13 02:28:01 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:28:01 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:28:02 np0005558317 python3.9[223187]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:28:02 np0005558317 systemd[1]: Reloading.
Dec 13 02:28:02 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:28:02 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:28:02 np0005558317 systemd[1]: Starting multipathd container...
Dec 13 02:28:02 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v538: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:02 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:28:02 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c900ea433a65f9109f55cb6fe57286edb5b6f4dee643c19d1929e16cfeec254/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 13 02:28:02 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c900ea433a65f9109f55cb6fe57286edb5b6f4dee643c19d1929e16cfeec254/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 13 02:28:02 np0005558317 systemd[1]: Started /usr/bin/podman healthcheck run f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6.
Dec 13 02:28:02 np0005558317 podman[223227]: 2025-12-13 07:28:02.488238865 +0000 UTC m=+0.082340820 container init f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 02:28:02 np0005558317 multipathd[223239]: + sudo -E kolla_set_configs
Dec 13 02:28:02 np0005558317 podman[223227]: 2025-12-13 07:28:02.5067429 +0000 UTC m=+0.100844834 container start f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 13 02:28:02 np0005558317 podman[223227]: multipathd
Dec 13 02:28:02 np0005558317 systemd[1]: Started multipathd container.
Dec 13 02:28:02 np0005558317 multipathd[223239]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 13 02:28:02 np0005558317 multipathd[223239]: INFO:__main__:Validating config file
Dec 13 02:28:02 np0005558317 multipathd[223239]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 13 02:28:02 np0005558317 multipathd[223239]: INFO:__main__:Writing out command to execute
Dec 13 02:28:02 np0005558317 multipathd[223239]: ++ cat /run_command
Dec 13 02:28:02 np0005558317 multipathd[223239]: + CMD='/usr/sbin/multipathd -d'
Dec 13 02:28:02 np0005558317 multipathd[223239]: + ARGS=
Dec 13 02:28:02 np0005558317 multipathd[223239]: + sudo kolla_copy_cacerts
Dec 13 02:28:02 np0005558317 multipathd[223239]: + [[ ! -n '' ]]
Dec 13 02:28:02 np0005558317 multipathd[223239]: + . kolla_extend_start
Dec 13 02:28:02 np0005558317 multipathd[223239]: Running command: '/usr/sbin/multipathd -d'
Dec 13 02:28:02 np0005558317 multipathd[223239]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 13 02:28:02 np0005558317 multipathd[223239]: + umask 0022
Dec 13 02:28:02 np0005558317 multipathd[223239]: + exec /usr/sbin/multipathd -d
Dec 13 02:28:02 np0005558317 multipathd[223239]: 2756.184926 | --------start up--------
Dec 13 02:28:02 np0005558317 multipathd[223239]: 2756.184938 | read /etc/multipath.conf
Dec 13 02:28:02 np0005558317 podman[223246]: 2025-12-13 07:28:02.590046739 +0000 UTC m=+0.075567903 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Dec 13 02:28:02 np0005558317 multipathd[223239]: 2756.188612 | path checkers start up
Dec 13 02:28:02 np0005558317 systemd[1]: f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6-4328e0a6a7ac1fb0.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:28:02 np0005558317 systemd[1]: f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6-4328e0a6a7ac1fb0.service: Failed with result 'exit-code'.
Dec 13 02:28:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:28:02 np0005558317 python3.9[223425]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:28:03 np0005558317 python3.9[223579]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:28:04 np0005558317 python3.9[223740]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:28:04 np0005558317 systemd[1]: Stopping multipathd container...
Dec 13 02:28:04 np0005558317 multipathd[223239]: 2757.729018 | exit (signal)
Dec 13 02:28:04 np0005558317 multipathd[223239]: 2757.729067 | --------shut down-------
Dec 13 02:28:04 np0005558317 systemd[1]: libpod-f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6.scope: Deactivated successfully.
Dec 13 02:28:04 np0005558317 conmon[223239]: conmon f696b337a701eeb12548 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6.scope/container/memory.events
Dec 13 02:28:04 np0005558317 podman[223744]: 2025-12-13 07:28:04.165125559 +0000 UTC m=+0.056709544 container died f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Dec 13 02:28:04 np0005558317 systemd[1]: f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6-4328e0a6a7ac1fb0.timer: Deactivated successfully.
Dec 13 02:28:04 np0005558317 systemd[1]: Stopped /usr/bin/podman healthcheck run f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6.
Dec 13 02:28:04 np0005558317 systemd[1]: var-lib-containers-storage-overlay-7c900ea433a65f9109f55cb6fe57286edb5b6f4dee643c19d1929e16cfeec254-merged.mount: Deactivated successfully.
Dec 13 02:28:04 np0005558317 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6-userdata-shm.mount: Deactivated successfully.
Dec 13 02:28:04 np0005558317 podman[223744]: 2025-12-13 07:28:04.24670215 +0000 UTC m=+0.138286135 container cleanup f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 02:28:04 np0005558317 podman[223744]: multipathd
Dec 13 02:28:04 np0005558317 podman[223766]: multipathd
Dec 13 02:28:04 np0005558317 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec 13 02:28:04 np0005558317 systemd[1]: Stopped multipathd container.
Dec 13 02:28:04 np0005558317 systemd[1]: Starting multipathd container...
Dec 13 02:28:04 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:28:04 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c900ea433a65f9109f55cb6fe57286edb5b6f4dee643c19d1929e16cfeec254/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 13 02:28:04 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c900ea433a65f9109f55cb6fe57286edb5b6f4dee643c19d1929e16cfeec254/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 13 02:28:04 np0005558317 systemd[1]: Started /usr/bin/podman healthcheck run f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6.
Dec 13 02:28:04 np0005558317 podman[223775]: 2025-12-13 07:28:04.385999266 +0000 UTC m=+0.075816672 container init f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 02:28:04 np0005558317 multipathd[223786]: + sudo -E kolla_set_configs
Dec 13 02:28:04 np0005558317 podman[223775]: 2025-12-13 07:28:04.403074253 +0000 UTC m=+0.092891649 container start f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 02:28:04 np0005558317 podman[223775]: multipathd
Dec 13 02:28:04 np0005558317 systemd[1]: Started multipathd container.
Dec 13 02:28:04 np0005558317 multipathd[223786]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 13 02:28:04 np0005558317 multipathd[223786]: INFO:__main__:Validating config file
Dec 13 02:28:04 np0005558317 multipathd[223786]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 13 02:28:04 np0005558317 multipathd[223786]: INFO:__main__:Writing out command to execute
Dec 13 02:28:04 np0005558317 multipathd[223786]: ++ cat /run_command
Dec 13 02:28:04 np0005558317 multipathd[223786]: + CMD='/usr/sbin/multipathd -d'
Dec 13 02:28:04 np0005558317 multipathd[223786]: + ARGS=
Dec 13 02:28:04 np0005558317 multipathd[223786]: + sudo kolla_copy_cacerts
Dec 13 02:28:04 np0005558317 podman[223794]: 2025-12-13 07:28:04.456128394 +0000 UTC m=+0.046975442 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 13 02:28:04 np0005558317 systemd[1]: f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6-407c9b11ead522c2.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:28:04 np0005558317 systemd[1]: f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6-407c9b11ead522c2.service: Failed with result 'exit-code'.
Dec 13 02:28:04 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v539: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:04 np0005558317 multipathd[223786]: + [[ ! -n '' ]]
Dec 13 02:28:04 np0005558317 multipathd[223786]: + . kolla_extend_start
Dec 13 02:28:04 np0005558317 multipathd[223786]: Running command: '/usr/sbin/multipathd -d'
Dec 13 02:28:04 np0005558317 multipathd[223786]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 13 02:28:04 np0005558317 multipathd[223786]: + umask 0022
Dec 13 02:28:04 np0005558317 multipathd[223786]: + exec /usr/sbin/multipathd -d
Dec 13 02:28:04 np0005558317 multipathd[223786]: 2758.075287 | --------start up--------
Dec 13 02:28:04 np0005558317 multipathd[223786]: 2758.075300 | read /etc/multipath.conf
Dec 13 02:28:04 np0005558317 multipathd[223786]: 2758.079325 | path checkers start up
Dec 13 02:28:04 np0005558317 python3.9[223975]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:05 np0005558317 python3.9[224127]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 13 02:28:05 np0005558317 python3.9[224279]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec 13 02:28:05 np0005558317 kernel: Key type psk registered
Dec 13 02:28:06 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v540: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:06 np0005558317 python3.9[224442]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:28:06 np0005558317 python3.9[224565]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610886.140287-630-186777551401996/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:07 np0005558317 python3.9[224717]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:28:08 np0005558317 python3.9[224869]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:28:08 np0005558317 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 02:28:08 np0005558317 systemd[1]: Stopped Load Kernel Modules.
Dec 13 02:28:08 np0005558317 systemd[1]: Stopping Load Kernel Modules...
Dec 13 02:28:08 np0005558317 systemd[1]: Starting Load Kernel Modules...
Dec 13 02:28:08 np0005558317 systemd[1]: Finished Load Kernel Modules.
Dec 13 02:28:08 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v541: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:08 np0005558317 python3.9[225025]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:28:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:28:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:28:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:28:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:28:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:28:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:28:10 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v542: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:10 np0005558317 systemd[1]: Reloading.
Dec 13 02:28:10 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:28:10 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:28:10 np0005558317 systemd[1]: Reloading.
Dec 13 02:28:10 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:28:10 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:28:11 np0005558317 systemd-logind[745]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 13 02:28:11 np0005558317 systemd-logind[745]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 13 02:28:11 np0005558317 lvm[225137]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:28:11 np0005558317 lvm[225135]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:28:11 np0005558317 lvm[225135]: VG ceph_vg2 finished
Dec 13 02:28:11 np0005558317 lvm[225134]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:28:11 np0005558317 lvm[225134]: VG ceph_vg0 finished
Dec 13 02:28:11 np0005558317 lvm[225137]: VG ceph_vg1 finished
Dec 13 02:28:11 np0005558317 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 02:28:11 np0005558317 systemd[1]: Starting man-db-cache-update.service...
Dec 13 02:28:11 np0005558317 systemd[1]: Reloading.
Dec 13 02:28:11 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:28:11 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:28:11 np0005558317 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 02:28:12 np0005558317 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 02:28:12 np0005558317 systemd[1]: Finished man-db-cache-update.service.
Dec 13 02:28:12 np0005558317 systemd[1]: man-db-cache-update.service: Consumed 1.044s CPU time.
Dec 13 02:28:12 np0005558317 systemd[1]: run-r0ca2529a3429451fbec8a693ae3bacf2.service: Deactivated successfully.
Dec 13 02:28:12 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v543: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:12 np0005558317 python3.9[226487]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:28:12 np0005558317 systemd[1]: Stopping Open-iSCSI...
Dec 13 02:28:12 np0005558317 iscsid[214351]: iscsid shutting down.
Dec 13 02:28:12 np0005558317 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 02:28:12 np0005558317 systemd[1]: Stopped Open-iSCSI.
Dec 13 02:28:12 np0005558317 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 13 02:28:12 np0005558317 systemd[1]: Starting Open-iSCSI...
Dec 13 02:28:12 np0005558317 systemd[1]: Started Open-iSCSI.
Dec 13 02:28:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:28:13 np0005558317 python3.9[226642]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:28:13 np0005558317 python3.9[226798]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:14 np0005558317 podman[226922]: 2025-12-13 07:28:14.281927133 +0000 UTC m=+0.041756527 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 02:28:14 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v544: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:14 np0005558317 python3.9[226964]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 02:28:14 np0005558317 systemd[1]: Reloading.
Dec 13 02:28:14 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:28:14 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:28:15 np0005558317 python3.9[227151]: ansible-ansible.builtin.service_facts Invoked
Dec 13 02:28:15 np0005558317 network[227168]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 02:28:15 np0005558317 network[227169]: 'network-scripts' will be removed from distribution in near future.
Dec 13 02:28:15 np0005558317 network[227170]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 02:28:16 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v545: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:28:17 np0005558317 python3.9[227445]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:28:18 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v546: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:18 np0005558317 python3.9[227598]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:28:18 np0005558317 python3.9[227751]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:28:19 np0005558317 python3.9[227904]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:28:20 np0005558317 python3.9[228057]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:28:20 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v547: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:20 np0005558317 python3.9[228210]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:28:21 np0005558317 python3.9[228363]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:28:21 np0005558317 python3.9[228516]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:28:22 np0005558317 python3.9[228669]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:22 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v548: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:22 np0005558317 python3.9[228821]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:28:22 np0005558317 python3.9[228973]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:23 np0005558317 python3.9[229125]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:23 np0005558317 python3.9[229277]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:23 np0005558317 podman[229278]: 2025-12-13 07:28:23.714523569 +0000 UTC m=+0.053831692 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 13 02:28:24 np0005558317 python3.9[229452]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:24 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v549: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:24 np0005558317 python3.9[229604]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:24 np0005558317 python3.9[229756]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:25 np0005558317 python3.9[229908]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:25 np0005558317 python3.9[230060]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:26 np0005558317 python3.9[230212]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:26 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v550: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:26 np0005558317 python3.9[230364]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:27 np0005558317 python3.9[230516]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:27 np0005558317 python3.9[230668]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:28:27 np0005558317 python3.9[230820]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:28 np0005558317 python3.9[230972]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:28 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v551: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:28 np0005558317 python3.9[231124]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:28:29 np0005558317 python3.9[231276]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 13 02:28:30 np0005558317 python3.9[231428]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 02:28:30 np0005558317 systemd[1]: Reloading.
Dec 13 02:28:30 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:28:30 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:28:30 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v552: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:30 np0005558317 python3.9[231616]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:28:31 np0005558317 python3.9[231769]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:28:31 np0005558317 python3.9[231922]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:28:32 np0005558317 python3.9[232075]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:28:32 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v553: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:32 np0005558317 python3.9[232228]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:28:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:28:32 np0005558317 python3.9[232381]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:28:33 np0005558317 python3.9[232534]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:28:33 np0005558317 python3.9[232687]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:28:34 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v554: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:34 np0005558317 podman[232765]: 2025-12-13 07:28:34.702694564 +0000 UTC m=+0.043036091 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 02:28:34 np0005558317 python3.9[232857]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:28:35 np0005558317 python3.9[233009]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:28:35 np0005558317 python3.9[233161]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:28:36 np0005558317 python3.9[233313]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:28:36 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v555: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:36 np0005558317 python3.9[233465]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:28:37 np0005558317 python3.9[233617]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:28:37 np0005558317 python3.9[233769]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:28:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:28:37 np0005558317 python3.9[233921]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:28:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:28:38
Dec 13 02:28:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:28:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 02:28:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] pools ['default.rgw.control', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', '.mgr', '.rgw.root', 'volumes', 'backups']
Dec 13 02:28:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:28:38 np0005558317 python3.9[234073]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:28:38 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v556: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:38 np0005558317 python3.9[234225]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:28:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:28:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:28:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:28:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:28:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:28:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:28:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:28:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:28:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:28:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:28:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:28:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:28:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:28:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:28:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:28:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:28:40 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v557: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:28:41.637 154121 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:28:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:28:41.637 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:28:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:28:41.638 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:28:41 np0005558317 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 13 02:28:42 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v558: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:28:43 np0005558317 python3.9[234379]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec 13 02:28:43 np0005558317 python3.9[234532]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 13 02:28:44 np0005558317 python3.9[234690]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 13 02:28:44 np0005558317 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 02:28:44 np0005558317 podman[234692]: 2025-12-13 07:28:44.399226573 +0000 UTC m=+0.070919121 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 13 02:28:44 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v559: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:45 np0005558317 systemd-logind[745]: New session 53 of user zuul.
Dec 13 02:28:45 np0005558317 systemd[1]: Started Session 53 of User zuul.
Dec 13 02:28:45 np0005558317 systemd[1]: session-53.scope: Deactivated successfully.
Dec 13 02:28:45 np0005558317 systemd-logind[745]: Session 53 logged out. Waiting for processes to exit.
Dec 13 02:28:45 np0005558317 systemd-logind[745]: Removed session 53.
Dec 13 02:28:45 np0005558317 python3.9[234893]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:28:45 np0005558317 python3.9[235014]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610925.2766905-1249-19431420739786/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:28:46 np0005558317 python3.9[235164]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:28:46 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v560: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:46 np0005558317 python3.9[235240]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:28:47 np0005558317 python3.9[235390]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:28:47 np0005558317 python3.9[235511]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610926.7019405-1249-219414858594350/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:28:47 np0005558317 python3.9[235661]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:28:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:28:48 np0005558317 python3.9[235782]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610927.4570239-1249-121539690938640/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:28:48 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v561: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:28:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:28:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:28:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:28:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:28:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:28:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:28:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:28:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:28:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:28:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:28:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:28:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 02:28:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:28:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:28:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:28:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:28:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:28:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:28:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:28:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:28:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:28:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:28:48 np0005558317 python3.9[235932]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:28:48 np0005558317 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 13 02:28:48 np0005558317 systemd[1]: virtqemud.service: Deactivated successfully.
Dec 13 02:28:48 np0005558317 python3.9[236055]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610928.2436867-1249-236448744192640/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:28:49 np0005558317 python3.9[236205]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:28:49 np0005558317 python3.9[236326]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610929.045829-1249-189956654126808/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:28:50 np0005558317 python3.9[236478]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:50 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v562: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:50 np0005558317 python3.9[236630]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:51 np0005558317 python3.9[236782]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:28:51 np0005558317 python3.9[236934]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:28:51 np0005558317 python3.9[237057]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1765610931.257735-1356-109280930427810/.source _original_basename=.4ku6dlpf follow=False checksum=c10c61059eb90b078036f954336eace3871dfb1d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec 13 02:28:52 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v563: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:52 np0005558317 python3.9[237209]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:28:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:28:52 np0005558317 python3.9[237361]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:28:53 np0005558317 python3.9[237482]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610932.6285732-1382-160797082728960/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=209f20105d13c02e6cb251483bae1beb11a1258f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:28:53 np0005558317 python3.9[237632]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:28:54 np0005558317 podman[237727]: 2025-12-13 07:28:54.053589483 +0000 UTC m=+0.059463980 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 02:28:54 np0005558317 python3.9[237763]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610933.4733877-1397-239489721690463/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=0333d3a3f5c3a0526b0ebe430250032166710e8a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:28:54 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v564: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:54 np0005558317 python3.9[237928]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec 13 02:28:55 np0005558317 python3.9[238080]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 13 02:28:55 np0005558317 python3[238232]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec 13 02:28:56 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v565: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:28:58 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:28:58 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:28:58 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:28:58 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:28:58 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:28:58 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:28:58 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:28:58 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:28:58 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:28:58 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:28:58 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:28:58 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:28:58 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v566: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:28:58 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:28:58 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:28:58 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:28:58 np0005558317 podman[238402]: 2025-12-13 07:28:58.671999411 +0000 UTC m=+0.075335244 container create 6eb5faac63842ff0796c2ec8eebdbcf8a6c64837ec705bb4fda9857b57c6c488 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 02:28:58 np0005558317 systemd[1]: Started libpod-conmon-6eb5faac63842ff0796c2ec8eebdbcf8a6c64837ec705bb4fda9857b57c6c488.scope.
Dec 13 02:28:58 np0005558317 podman[238402]: 2025-12-13 07:28:58.636698934 +0000 UTC m=+0.040034778 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:28:58 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:28:58 np0005558317 podman[238402]: 2025-12-13 07:28:58.734376949 +0000 UTC m=+0.137712782 container init 6eb5faac63842ff0796c2ec8eebdbcf8a6c64837ec705bb4fda9857b57c6c488 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_cerf, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 02:28:58 np0005558317 podman[238402]: 2025-12-13 07:28:58.740085131 +0000 UTC m=+0.143420944 container start 6eb5faac63842ff0796c2ec8eebdbcf8a6c64837ec705bb4fda9857b57c6c488 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 02:28:58 np0005558317 podman[238402]: 2025-12-13 07:28:58.741885056 +0000 UTC m=+0.145220879 container attach 6eb5faac63842ff0796c2ec8eebdbcf8a6c64837ec705bb4fda9857b57c6c488 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_cerf, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:28:58 np0005558317 blissful_cerf[238415]: 167 167
Dec 13 02:28:58 np0005558317 podman[238402]: 2025-12-13 07:28:58.743883805 +0000 UTC m=+0.147219638 container died 6eb5faac63842ff0796c2ec8eebdbcf8a6c64837ec705bb4fda9857b57c6c488 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Dec 13 02:28:58 np0005558317 systemd[1]: libpod-6eb5faac63842ff0796c2ec8eebdbcf8a6c64837ec705bb4fda9857b57c6c488.scope: Deactivated successfully.
Dec 13 02:28:58 np0005558317 systemd[1]: var-lib-containers-storage-overlay-6d57f2c544fa3c93dee744616260d6a57144eddd29abc8bbbe3312af734474cb-merged.mount: Deactivated successfully.
Dec 13 02:28:58 np0005558317 podman[238402]: 2025-12-13 07:28:58.78178487 +0000 UTC m=+0.185120682 container remove 6eb5faac63842ff0796c2ec8eebdbcf8a6c64837ec705bb4fda9857b57c6c488 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_cerf, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:28:58 np0005558317 systemd[1]: libpod-conmon-6eb5faac63842ff0796c2ec8eebdbcf8a6c64837ec705bb4fda9857b57c6c488.scope: Deactivated successfully.
Dec 13 02:29:00 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v567: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:02 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v568: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:29:04 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v569: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:06 np0005558317 podman[238437]: 2025-12-13 07:29:06.056917437 +0000 UTC m=+7.131015176 container create f4c70ed9072a9dccaec7a9d50642c52f6a385a53bf9515dfafc756fd2cc42a36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 02:29:06 np0005558317 podman[238475]: 2025-12-13 07:29:06.071683062 +0000 UTC m=+0.408781954 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 13 02:29:06 np0005558317 podman[238243]: 2025-12-13 07:29:06.088696261 +0000 UTC m=+10.250228585 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b
Dec 13 02:29:06 np0005558317 systemd[1]: Started libpod-conmon-f4c70ed9072a9dccaec7a9d50642c52f6a385a53bf9515dfafc756fd2cc42a36.scope.
Dec 13 02:29:06 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:29:06 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63a2d243eb7604d7a59c8f35917ec9ddfd33340facd340a34cbf51c877db93cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:29:06 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63a2d243eb7604d7a59c8f35917ec9ddfd33340facd340a34cbf51c877db93cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:29:06 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63a2d243eb7604d7a59c8f35917ec9ddfd33340facd340a34cbf51c877db93cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:29:06 np0005558317 podman[238437]: 2025-12-13 07:29:06.038054339 +0000 UTC m=+7.112152107 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:29:06 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63a2d243eb7604d7a59c8f35917ec9ddfd33340facd340a34cbf51c877db93cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:29:06 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63a2d243eb7604d7a59c8f35917ec9ddfd33340facd340a34cbf51c877db93cb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:29:06 np0005558317 podman[238437]: 2025-12-13 07:29:06.131749313 +0000 UTC m=+7.205847071 container init f4c70ed9072a9dccaec7a9d50642c52f6a385a53bf9515dfafc756fd2cc42a36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_kowalevski, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 02:29:06 np0005558317 podman[238437]: 2025-12-13 07:29:06.136988314 +0000 UTC m=+7.211086063 container start f4c70ed9072a9dccaec7a9d50642c52f6a385a53bf9515dfafc756fd2cc42a36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_kowalevski, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:29:06 np0005558317 podman[238437]: 2025-12-13 07:29:06.138107659 +0000 UTC m=+7.212205407 container attach f4c70ed9072a9dccaec7a9d50642c52f6a385a53bf9515dfafc756fd2cc42a36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_kowalevski, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:29:06 np0005558317 podman[238518]: 2025-12-13 07:29:06.192227391 +0000 UTC m=+0.027797559 container create 547dab5fa81042d08d1db983dbc1e200a1ce8b11b8dce99b2f25723fa6b8123e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.vendor=CentOS)
Dec 13 02:29:06 np0005558317 podman[238518]: 2025-12-13 07:29:06.179872699 +0000 UTC m=+0.015442888 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b
Dec 13 02:29:06 np0005558317 python3[238232]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Dec 13 02:29:06 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v570: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:06 np0005558317 wizardly_kowalevski[238495]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:29:06 np0005558317 wizardly_kowalevski[238495]: --> All data devices are unavailable
Dec 13 02:29:06 np0005558317 systemd[1]: libpod-f4c70ed9072a9dccaec7a9d50642c52f6a385a53bf9515dfafc756fd2cc42a36.scope: Deactivated successfully.
Dec 13 02:29:06 np0005558317 podman[238437]: 2025-12-13 07:29:06.539487633 +0000 UTC m=+7.613585381 container died f4c70ed9072a9dccaec7a9d50642c52f6a385a53bf9515dfafc756fd2cc42a36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 02:29:06 np0005558317 systemd[1]: var-lib-containers-storage-overlay-63a2d243eb7604d7a59c8f35917ec9ddfd33340facd340a34cbf51c877db93cb-merged.mount: Deactivated successfully.
Dec 13 02:29:06 np0005558317 podman[238437]: 2025-12-13 07:29:06.570372866 +0000 UTC m=+7.644470614 container remove f4c70ed9072a9dccaec7a9d50642c52f6a385a53bf9515dfafc756fd2cc42a36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_kowalevski, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:29:06 np0005558317 systemd[1]: libpod-conmon-f4c70ed9072a9dccaec7a9d50642c52f6a385a53bf9515dfafc756fd2cc42a36.scope: Deactivated successfully.
Dec 13 02:29:06 np0005558317 python3.9[238747]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:29:06 np0005558317 podman[238808]: 2025-12-13 07:29:06.909426028 +0000 UTC m=+0.026116227 container create 6744b0dab89b28e21eeb542c0fb66ecf64b09b7bda8745c47dd12cb81758eaa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_ganguly, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:29:06 np0005558317 systemd[1]: Started libpod-conmon-6744b0dab89b28e21eeb542c0fb66ecf64b09b7bda8745c47dd12cb81758eaa2.scope.
Dec 13 02:29:06 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:29:06 np0005558317 podman[238808]: 2025-12-13 07:29:06.963805758 +0000 UTC m=+0.080495967 container init 6744b0dab89b28e21eeb542c0fb66ecf64b09b7bda8745c47dd12cb81758eaa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 02:29:06 np0005558317 podman[238808]: 2025-12-13 07:29:06.96894483 +0000 UTC m=+0.085635019 container start 6744b0dab89b28e21eeb542c0fb66ecf64b09b7bda8745c47dd12cb81758eaa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_ganguly, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:29:06 np0005558317 podman[238808]: 2025-12-13 07:29:06.970281464 +0000 UTC m=+0.086971673 container attach 6744b0dab89b28e21eeb542c0fb66ecf64b09b7bda8745c47dd12cb81758eaa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_ganguly, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 02:29:06 np0005558317 great_ganguly[238823]: 167 167
Dec 13 02:29:06 np0005558317 systemd[1]: libpod-6744b0dab89b28e21eeb542c0fb66ecf64b09b7bda8745c47dd12cb81758eaa2.scope: Deactivated successfully.
Dec 13 02:29:06 np0005558317 podman[238808]: 2025-12-13 07:29:06.972764984 +0000 UTC m=+0.089455173 container died 6744b0dab89b28e21eeb542c0fb66ecf64b09b7bda8745c47dd12cb81758eaa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:29:06 np0005558317 systemd[1]: var-lib-containers-storage-overlay-acd48edb1638c3f273fa236ff717d84fe465287c205c137dd1c16b08ebabe262-merged.mount: Deactivated successfully.
Dec 13 02:29:06 np0005558317 podman[238808]: 2025-12-13 07:29:06.994124276 +0000 UTC m=+0.110814464 container remove 6744b0dab89b28e21eeb542c0fb66ecf64b09b7bda8745c47dd12cb81758eaa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_ganguly, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 02:29:06 np0005558317 podman[238808]: 2025-12-13 07:29:06.898866402 +0000 UTC m=+0.015556611 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:29:07 np0005558317 systemd[1]: libpod-conmon-6744b0dab89b28e21eeb542c0fb66ecf64b09b7bda8745c47dd12cb81758eaa2.scope: Deactivated successfully.
Dec 13 02:29:07 np0005558317 podman[238845]: 2025-12-13 07:29:07.114043589 +0000 UTC m=+0.028355387 container create 68073674a53a0d39a456638d5351719bde05171dcd3f002fbf2c9864277a590a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 02:29:07 np0005558317 systemd[1]: Started libpod-conmon-68073674a53a0d39a456638d5351719bde05171dcd3f002fbf2c9864277a590a.scope.
Dec 13 02:29:07 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:29:07 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5631f2d8b7a9e7b575ccdf9d4254ee9af09e6532f7fd5abe85cb50ef625f82ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:29:07 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5631f2d8b7a9e7b575ccdf9d4254ee9af09e6532f7fd5abe85cb50ef625f82ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:29:07 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5631f2d8b7a9e7b575ccdf9d4254ee9af09e6532f7fd5abe85cb50ef625f82ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:29:07 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5631f2d8b7a9e7b575ccdf9d4254ee9af09e6532f7fd5abe85cb50ef625f82ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:29:07 np0005558317 podman[238845]: 2025-12-13 07:29:07.170222502 +0000 UTC m=+0.084534311 container init 68073674a53a0d39a456638d5351719bde05171dcd3f002fbf2c9864277a590a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 02:29:07 np0005558317 podman[238845]: 2025-12-13 07:29:07.17630499 +0000 UTC m=+0.090616777 container start 68073674a53a0d39a456638d5351719bde05171dcd3f002fbf2c9864277a590a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jones, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 02:29:07 np0005558317 podman[238845]: 2025-12-13 07:29:07.177708138 +0000 UTC m=+0.092019946 container attach 68073674a53a0d39a456638d5351719bde05171dcd3f002fbf2c9864277a590a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 02:29:07 np0005558317 podman[238845]: 2025-12-13 07:29:07.102071148 +0000 UTC m=+0.016382946 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]: {
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:    "0": [
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:        {
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "devices": [
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "/dev/loop3"
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            ],
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "lv_name": "ceph_lv0",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "lv_size": "21470642176",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "name": "ceph_lv0",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "tags": {
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.cluster_name": "ceph",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.crush_device_class": "",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.encrypted": "0",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.objectstore": "bluestore",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.osd_id": "0",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.type": "block",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.vdo": "0",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.with_tpm": "0"
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            },
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "type": "block",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "vg_name": "ceph_vg0"
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:        }
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:    ],
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:    "1": [
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:        {
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "devices": [
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "/dev/loop4"
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            ],
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "lv_name": "ceph_lv1",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "lv_size": "21470642176",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "name": "ceph_lv1",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "tags": {
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.cluster_name": "ceph",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.crush_device_class": "",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.encrypted": "0",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.objectstore": "bluestore",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.osd_id": "1",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.type": "block",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.vdo": "0",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.with_tpm": "0"
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            },
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "type": "block",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "vg_name": "ceph_vg1"
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:        }
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:    ],
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:    "2": [
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:        {
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "devices": [
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "/dev/loop5"
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            ],
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "lv_name": "ceph_lv2",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "lv_size": "21470642176",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "name": "ceph_lv2",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "tags": {
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.cluster_name": "ceph",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.crush_device_class": "",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.encrypted": "0",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.objectstore": "bluestore",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.osd_id": "2",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.type": "block",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.vdo": "0",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:                "ceph.with_tpm": "0"
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            },
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "type": "block",
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:            "vg_name": "ceph_vg2"
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:        }
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]:    ]
Dec 13 02:29:07 np0005558317 eloquent_jones[238876]: }
Dec 13 02:29:07 np0005558317 systemd[1]: libpod-68073674a53a0d39a456638d5351719bde05171dcd3f002fbf2c9864277a590a.scope: Deactivated successfully.
Dec 13 02:29:07 np0005558317 conmon[238876]: conmon 68073674a53a0d39a456 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-68073674a53a0d39a456638d5351719bde05171dcd3f002fbf2c9864277a590a.scope/container/memory.events
Dec 13 02:29:07 np0005558317 podman[238845]: 2025-12-13 07:29:07.425277973 +0000 UTC m=+0.339589781 container died 68073674a53a0d39a456638d5351719bde05171dcd3f002fbf2c9864277a590a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 02:29:07 np0005558317 systemd[1]: var-lib-containers-storage-overlay-5631f2d8b7a9e7b575ccdf9d4254ee9af09e6532f7fd5abe85cb50ef625f82ae-merged.mount: Deactivated successfully.
Dec 13 02:29:07 np0005558317 podman[238845]: 2025-12-13 07:29:07.449580819 +0000 UTC m=+0.363892607 container remove 68073674a53a0d39a456638d5351719bde05171dcd3f002fbf2c9864277a590a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jones, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 02:29:07 np0005558317 systemd[1]: libpod-conmon-68073674a53a0d39a456638d5351719bde05171dcd3f002fbf2c9864277a590a.scope: Deactivated successfully.
Dec 13 02:29:07 np0005558317 python3.9[238992]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec 13 02:29:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:29:07 np0005558317 podman[239146]: 2025-12-13 07:29:07.800468844 +0000 UTC m=+0.029971688 container create 3479ccf8cebb1c00379f1f09e8b0c8c39ea4c127efe8a7ba5d0342887efe2343 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_poincare, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 02:29:07 np0005558317 systemd[1]: Started libpod-conmon-3479ccf8cebb1c00379f1f09e8b0c8c39ea4c127efe8a7ba5d0342887efe2343.scope.
Dec 13 02:29:07 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:29:07 np0005558317 podman[239146]: 2025-12-13 07:29:07.853526437 +0000 UTC m=+0.083029291 container init 3479ccf8cebb1c00379f1f09e8b0c8c39ea4c127efe8a7ba5d0342887efe2343 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_poincare, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:29:07 np0005558317 podman[239146]: 2025-12-13 07:29:07.858487055 +0000 UTC m=+0.087989888 container start 3479ccf8cebb1c00379f1f09e8b0c8c39ea4c127efe8a7ba5d0342887efe2343 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_poincare, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:29:07 np0005558317 podman[239146]: 2025-12-13 07:29:07.85959057 +0000 UTC m=+0.089093403 container attach 3479ccf8cebb1c00379f1f09e8b0c8c39ea4c127efe8a7ba5d0342887efe2343 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_poincare, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:29:07 np0005558317 hopeful_poincare[239196]: 167 167
Dec 13 02:29:07 np0005558317 systemd[1]: libpod-3479ccf8cebb1c00379f1f09e8b0c8c39ea4c127efe8a7ba5d0342887efe2343.scope: Deactivated successfully.
Dec 13 02:29:07 np0005558317 podman[239146]: 2025-12-13 07:29:07.863301267 +0000 UTC m=+0.092804121 container died 3479ccf8cebb1c00379f1f09e8b0c8c39ea4c127efe8a7ba5d0342887efe2343 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_poincare, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 02:29:07 np0005558317 systemd[1]: var-lib-containers-storage-overlay-f4d95096dff4beac66d4555b8c5e77543bcc44fc3780ca85921da1e00571bbfa-merged.mount: Deactivated successfully.
Dec 13 02:29:07 np0005558317 podman[239146]: 2025-12-13 07:29:07.884089445 +0000 UTC m=+0.113592279 container remove 3479ccf8cebb1c00379f1f09e8b0c8c39ea4c127efe8a7ba5d0342887efe2343 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_poincare, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 02:29:07 np0005558317 podman[239146]: 2025-12-13 07:29:07.788578817 +0000 UTC m=+0.018081671 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:29:07 np0005558317 systemd[1]: libpod-conmon-3479ccf8cebb1c00379f1f09e8b0c8c39ea4c127efe8a7ba5d0342887efe2343.scope: Deactivated successfully.
Dec 13 02:29:08 np0005558317 podman[239251]: 2025-12-13 07:29:08.008768168 +0000 UTC m=+0.028293561 container create 4ccc05e0f2afbc4570ca5ba20fa61f71dea1f5ef620a1f9d88f0bf43aac2a0d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_black, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 02:29:08 np0005558317 systemd[1]: Started libpod-conmon-4ccc05e0f2afbc4570ca5ba20fa61f71dea1f5ef620a1f9d88f0bf43aac2a0d8.scope.
Dec 13 02:29:08 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:29:08 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b37cf8a6036ab79ead7f23d0a0410ebedb7e5c3a37a32e4feaa10ab44d92f54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:29:08 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b37cf8a6036ab79ead7f23d0a0410ebedb7e5c3a37a32e4feaa10ab44d92f54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:29:08 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b37cf8a6036ab79ead7f23d0a0410ebedb7e5c3a37a32e4feaa10ab44d92f54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:29:08 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b37cf8a6036ab79ead7f23d0a0410ebedb7e5c3a37a32e4feaa10ab44d92f54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:29:08 np0005558317 podman[239251]: 2025-12-13 07:29:08.069739663 +0000 UTC m=+0.089265056 container init 4ccc05e0f2afbc4570ca5ba20fa61f71dea1f5ef620a1f9d88f0bf43aac2a0d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_black, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:29:08 np0005558317 podman[239251]: 2025-12-13 07:29:08.075211722 +0000 UTC m=+0.094737115 container start 4ccc05e0f2afbc4570ca5ba20fa61f71dea1f5ef620a1f9d88f0bf43aac2a0d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_black, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:29:08 np0005558317 podman[239251]: 2025-12-13 07:29:08.077841005 +0000 UTC m=+0.097366398 container attach 4ccc05e0f2afbc4570ca5ba20fa61f71dea1f5ef620a1f9d88f0bf43aac2a0d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_black, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 02:29:08 np0005558317 python3.9[239245]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 13 02:29:08 np0005558317 podman[239251]: 2025-12-13 07:29:07.997269497 +0000 UTC m=+0.016794900 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:29:08 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v571: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:08 np0005558317 lvm[239494]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:29:08 np0005558317 lvm[239493]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:29:08 np0005558317 lvm[239494]: VG ceph_vg1 finished
Dec 13 02:29:08 np0005558317 lvm[239493]: VG ceph_vg0 finished
Dec 13 02:29:08 np0005558317 lvm[239497]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:29:08 np0005558317 lvm[239497]: VG ceph_vg2 finished
Dec 13 02:29:08 np0005558317 reverent_black[239264]: {}
Dec 13 02:29:08 np0005558317 lvm[239499]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:29:08 np0005558317 lvm[239499]: VG ceph_vg2 finished
Dec 13 02:29:08 np0005558317 podman[239251]: 2025-12-13 07:29:08.719389628 +0000 UTC m=+0.738915022 container died 4ccc05e0f2afbc4570ca5ba20fa61f71dea1f5ef620a1f9d88f0bf43aac2a0d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_black, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 02:29:08 np0005558317 systemd[1]: libpod-4ccc05e0f2afbc4570ca5ba20fa61f71dea1f5ef620a1f9d88f0bf43aac2a0d8.scope: Deactivated successfully.
Dec 13 02:29:08 np0005558317 systemd[1]: var-lib-containers-storage-overlay-8b37cf8a6036ab79ead7f23d0a0410ebedb7e5c3a37a32e4feaa10ab44d92f54-merged.mount: Deactivated successfully.
Dec 13 02:29:08 np0005558317 lvm[239502]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:29:08 np0005558317 lvm[239502]: VG ceph_vg2 finished
Dec 13 02:29:08 np0005558317 podman[239251]: 2025-12-13 07:29:08.742245224 +0000 UTC m=+0.761770617 container remove 4ccc05e0f2afbc4570ca5ba20fa61f71dea1f5ef620a1f9d88f0bf43aac2a0d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 02:29:08 np0005558317 systemd[1]: libpod-conmon-4ccc05e0f2afbc4570ca5ba20fa61f71dea1f5ef620a1f9d88f0bf43aac2a0d8.scope: Deactivated successfully.
Dec 13 02:29:08 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:29:08 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:29:08 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:29:08 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:29:08 np0005558317 python3[239483]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 13 02:29:08 np0005558317 podman[239564]: 2025-12-13 07:29:08.956343733 +0000 UTC m=+0.030252505 container create 85647c0fe69de86f6774be6672640c6d932467db58f079f97876cd7dae0a1306 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute, org.label-schema.build-date=20251202, config_id=edpm, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:29:08 np0005558317 podman[239564]: 2025-12-13 07:29:08.942722942 +0000 UTC m=+0.016631723 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b
Dec 13 02:29:08 np0005558317 python3[239483]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b kolla_start
Dec 13 02:29:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:29:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:29:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:29:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:29:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:29:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:29:09 np0005558317 python3.9[239742]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:29:09 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:29:09 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:29:10 np0005558317 python3.9[239896]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:29:10 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v572: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:10 np0005558317 python3.9[240047]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765610950.1174004-1489-120691869066872/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:29:10 np0005558317 virtnodedevd[204194]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 13 02:29:10 np0005558317 virtnodedevd[204194]: hostname: compute-0
Dec 13 02:29:10 np0005558317 virtnodedevd[204194]: Make forcefull daemon shutdown
Dec 13 02:29:10 np0005558317 systemd[1]: virtnodedevd.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:29:10 np0005558317 python3.9[240123]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 02:29:10 np0005558317 systemd[1]: virtnodedevd.service: Failed with result 'exit-code'.
Dec 13 02:29:10 np0005558317 systemd[1]: Reloading.
Dec 13 02:29:11 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:29:11 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:29:11 np0005558317 systemd[1]: virtnodedevd.service: Scheduled restart job, restart counter is at 1.
Dec 13 02:29:11 np0005558317 systemd[1]: Stopped libvirt nodedev daemon.
Dec 13 02:29:11 np0005558317 systemd[1]: Starting libvirt nodedev daemon...
Dec 13 02:29:11 np0005558317 systemd[1]: Started libvirt nodedev daemon.
Dec 13 02:29:11 np0005558317 python3.9[240257]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:29:11 np0005558317 systemd[1]: Reloading.
Dec 13 02:29:11 np0005558317 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:29:11 np0005558317 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:29:11 np0005558317 systemd[1]: Starting nova_compute container...
Dec 13 02:29:12 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:29:12 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b99b624df972157fdfc5e1964b2f648ce2236d9f885434f04fdbb4225095713/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 13 02:29:12 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b99b624df972157fdfc5e1964b2f648ce2236d9f885434f04fdbb4225095713/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 13 02:29:12 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b99b624df972157fdfc5e1964b2f648ce2236d9f885434f04fdbb4225095713/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 13 02:29:12 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b99b624df972157fdfc5e1964b2f648ce2236d9f885434f04fdbb4225095713/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 13 02:29:12 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b99b624df972157fdfc5e1964b2f648ce2236d9f885434f04fdbb4225095713/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 13 02:29:12 np0005558317 podman[240296]: 2025-12-13 07:29:12.027821817 +0000 UTC m=+0.067168519 container init 85647c0fe69de86f6774be6672640c6d932467db58f079f97876cd7dae0a1306 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 02:29:12 np0005558317 podman[240296]: 2025-12-13 07:29:12.034803604 +0000 UTC m=+0.074150306 container start 85647c0fe69de86f6774be6672640c6d932467db58f079f97876cd7dae0a1306 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, org.label-schema.vendor=CentOS, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 02:29:12 np0005558317 podman[240296]: nova_compute
Dec 13 02:29:12 np0005558317 nova_compute[240308]: + sudo -E kolla_set_configs
Dec 13 02:29:12 np0005558317 systemd[1]: Started nova_compute container.
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Validating config file
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Copying service configuration files
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Deleting /etc/ceph
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Creating directory /etc/ceph
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Setting permission for /etc/ceph
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Writing out command to execute
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 13 02:29:12 np0005558317 nova_compute[240308]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 13 02:29:12 np0005558317 nova_compute[240308]: ++ cat /run_command
Dec 13 02:29:12 np0005558317 nova_compute[240308]: + CMD=nova-compute
Dec 13 02:29:12 np0005558317 nova_compute[240308]: + ARGS=
Dec 13 02:29:12 np0005558317 nova_compute[240308]: + sudo kolla_copy_cacerts
Dec 13 02:29:12 np0005558317 nova_compute[240308]: + [[ ! -n '' ]]
Dec 13 02:29:12 np0005558317 nova_compute[240308]: + . kolla_extend_start
Dec 13 02:29:12 np0005558317 nova_compute[240308]: Running command: 'nova-compute'
Dec 13 02:29:12 np0005558317 nova_compute[240308]: + echo 'Running command: '\''nova-compute'\'''
Dec 13 02:29:12 np0005558317 nova_compute[240308]: + umask 0022
Dec 13 02:29:12 np0005558317 nova_compute[240308]: + exec nova-compute
Dec 13 02:29:12 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v573: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:12 np0005558317 python3.9[240469]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:29:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:29:13 np0005558317 python3.9[240620]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:29:13 np0005558317 python3.9[240770]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:29:13 np0005558317 nova_compute[240308]: 2025-12-13 07:29:13.845 240312 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec 13 02:29:13 np0005558317 nova_compute[240308]: 2025-12-13 07:29:13.846 240312 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec 13 02:29:13 np0005558317 nova_compute[240308]: 2025-12-13 07:29:13.846 240312 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec 13 02:29:13 np0005558317 nova_compute[240308]: 2025-12-13 07:29:13.846 240312 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec 13 02:29:13 np0005558317 nova_compute[240308]: 2025-12-13 07:29:13.953 240312 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 02:29:13 np0005558317 nova_compute[240308]: 2025-12-13 07:29:13.965 240312 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 02:29:13 np0005558317 nova_compute[240308]: 2025-12-13 07:29:13.965 240312 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.442 240312 INFO nova.virt.driver [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Dec 13 02:29:14 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v574: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.545 240312 INFO nova.compute.provider_config [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Dec 13 02:29:14 np0005558317 python3.9[240926]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.556 240312 DEBUG oslo_concurrency.lockutils [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 02:29:14 np0005558317 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.556 240312 DEBUG oslo_concurrency.lockutils [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.556 240312 DEBUG oslo_concurrency.lockutils [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.557 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.557 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.557 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.557 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.557 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.557 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.557 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.558 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.558 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.558 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.558 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.558 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.558 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.558 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.559 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.559 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.559 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.559 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.559 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.559 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.559 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.559 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.560 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.560 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.560 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.560 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.560 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.560 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.560 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.561 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.561 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.561 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.561 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.561 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.561 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.561 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.562 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.562 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.562 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.562 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.562 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.562 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.562 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.563 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.563 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.563 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.563 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.563 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.563 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.563 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.564 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.564 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.564 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.564 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.564 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.564 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.564 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.565 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.565 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.565 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.565 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.565 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.565 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.565 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.565 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.566 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.566 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.566 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.566 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.566 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.566 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.566 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.567 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.567 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.567 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.567 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.567 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.567 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.567 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.568 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.568 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.568 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.568 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.568 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.568 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.568 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.569 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.569 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.569 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.569 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.569 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.569 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.569 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.569 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.570 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.570 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.570 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.570 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.570 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.570 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.570 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.571 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.571 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.571 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.571 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.571 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.571 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.571 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.571 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.572 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.572 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.572 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.572 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.572 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.572 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.572 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.572 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.573 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.573 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.573 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.573 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.573 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.573 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.573 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.574 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.574 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.574 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.574 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.574 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.574 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.574 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.574 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.575 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.575 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.575 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.575 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.575 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.575 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.575 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.576 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.576 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.576 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.576 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.576 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.576 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.576 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.576 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.577 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.577 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.577 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.577 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.577 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.577 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.577 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.578 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.578 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.578 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.578 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.578 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.578 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.579 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.579 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.579 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.579 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.579 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.579 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.579 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.579 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.580 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.580 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.580 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.580 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.580 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.580 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.580 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.581 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.581 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.581 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.581 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.581 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.581 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.582 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.582 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.582 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.582 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.582 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.582 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.582 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.583 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.583 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.583 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.583 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.583 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.583 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.583 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.584 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.584 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.584 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.584 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.584 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.584 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.584 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.585 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.585 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.585 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.585 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.585 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.585 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.585 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.586 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.586 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.586 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.586 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.586 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.586 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.586 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.586 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.587 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.587 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.587 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.587 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.587 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.587 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.587 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.588 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.588 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.588 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.588 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.588 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.588 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.588 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.589 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.589 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.589 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.589 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.589 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.589 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.589 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.590 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.590 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.590 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.590 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.590 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.590 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.590 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.591 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.591 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.591 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.591 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.591 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.591 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.591 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.592 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.592 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.592 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.592 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.592 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.592 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.592 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.593 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.593 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.593 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.593 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.593 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.593 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.593 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.594 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.594 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.594 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.594 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.594 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.594 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.594 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.594 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.595 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.595 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.595 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.595 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.595 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.595 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.595 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.596 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.596 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.596 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.596 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.596 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.596 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.596 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.596 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.597 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.597 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.597 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.597 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.597 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.597 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.597 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.597 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.598 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.598 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.598 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.598 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.598 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.598 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.598 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.599 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.599 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.599 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.599 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.599 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.599 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.599 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.599 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.600 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.600 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.600 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.600 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.600 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.600 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.600 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.601 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.601 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.601 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.601 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.601 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.601 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.601 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.601 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.602 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.602 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.602 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.602 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.602 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.602 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.602 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.602 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.603 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.603 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.603 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.603 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.603 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.603 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.603 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.604 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.604 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.604 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.604 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.604 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.604 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.604 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.605 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.605 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.605 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.605 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.605 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.605 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.606 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.606 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.606 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.606 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.606 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.606 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.606 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.607 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.607 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.607 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.607 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.607 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.607 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.607 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.608 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.608 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.608 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.608 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.608 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.608 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.608 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.608 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.609 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.609 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.609 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.609 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.609 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.609 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.609 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.610 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.610 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.610 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.610 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.610 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.610 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.610 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.611 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.611 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.611 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.611 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.611 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.611 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.611 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.612 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.612 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.612 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.612 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.612 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.612 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.612 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.613 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.613 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.613 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.613 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.613 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.613 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.613 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.613 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.614 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.614 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.614 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.614 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.614 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.614 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.614 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.615 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.615 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.615 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.615 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.615 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.615 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.615 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.616 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.616 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.616 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.616 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.616 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.616 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.616 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.616 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.617 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.617 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.617 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.617 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.617 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.617 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.617 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.618 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.618 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.618 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.618 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.618 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.618 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.618 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.619 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.619 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.619 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.619 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.619 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.619 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.619 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.620 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.620 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.620 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.620 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.620 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.620 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.620 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.621 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.621 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.621 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.621 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.621 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.621 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.621 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.621 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.622 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.622 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.622 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.622 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.622 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.622 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.622 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.623 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.623 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.623 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.623 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.623 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.623 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.623 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.624 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.624 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.624 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.624 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.624 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.624 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.624 240312 WARNING oslo_config.cfg [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 13 02:29:14 np0005558317 nova_compute[240308]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 13 02:29:14 np0005558317 nova_compute[240308]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 13 02:29:14 np0005558317 nova_compute[240308]: and ``live_migration_inbound_addr`` respectively.
Dec 13 02:29:14 np0005558317 nova_compute[240308]: ).  Its value may be silently ignored in the future.#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.625 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.625 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.625 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.625 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.625 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.625 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.626 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.626 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.626 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.626 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.626 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.626 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.626 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.627 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.627 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.627 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.627 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.627 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.627 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.rbd_secret_uuid        = 00fdae1b-7fad-5f1b-8734-ba4d9298a6de log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.627 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.628 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.628 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.628 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.628 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.628 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.628 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.628 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.629 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.629 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.629 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.629 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.629 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.629 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.630 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.630 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.630 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.630 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.630 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.630 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.630 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.631 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.631 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.631 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.631 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.631 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.631 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.631 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.631 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.632 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.632 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.632 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.632 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.632 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.632 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.633 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.633 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.633 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.633 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.633 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.633 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.633 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.633 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.634 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.634 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.634 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.634 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.634 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.634 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.634 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.635 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.635 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.635 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.635 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.635 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.635 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.635 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.635 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.636 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.636 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.636 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.636 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.636 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.636 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.636 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.637 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.637 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.637 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.637 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.637 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.637 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.638 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.638 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.638 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.638 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.638 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.638 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.639 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.639 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.639 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.639 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.639 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.639 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.640 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.640 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.640 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.640 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.640 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.640 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.640 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.641 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.641 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.641 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.641 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.641 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.641 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.641 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.642 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.642 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.642 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.642 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.642 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.642 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.642 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.643 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.643 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.643 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.643 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.643 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.643 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.643 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.644 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.644 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.644 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.644 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.644 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.644 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.644 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.644 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.645 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.645 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.645 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.646 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.646 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.646 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.646 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.646 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.646 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.646 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.647 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.647 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.647 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.647 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.647 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.647 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.647 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.648 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.648 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.648 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.648 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.648 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.648 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.649 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.649 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.649 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.649 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.649 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.649 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.649 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.649 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.650 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.650 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.650 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.650 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.650 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.650 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.650 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.651 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.651 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.651 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.651 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.651 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.651 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.652 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.652 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.652 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.652 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.652 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.652 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.652 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.652 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.653 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.653 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.653 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.653 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.653 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.653 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.654 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.654 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.654 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.654 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.654 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.654 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.654 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.655 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.655 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.657 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.657 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.657 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.657 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.657 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.658 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.658 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.658 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.658 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.658 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.658 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.659 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.659 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.659 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.659 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.659 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.659 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.659 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.660 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.660 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.660 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.660 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.660 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.660 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.660 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.660 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.661 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.661 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.661 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.661 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.661 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.661 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.662 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.662 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.662 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.662 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.662 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.662 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.662 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.663 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.663 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.663 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.663 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.663 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.663 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.664 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.664 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.664 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.664 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.664 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.664 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.664 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.665 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.665 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.665 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.665 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.665 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.665 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.665 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.666 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.666 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.666 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.666 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.666 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.666 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.666 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.666 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.667 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.667 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.667 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.667 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.667 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.667 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.667 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.668 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.668 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.668 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.668 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.668 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.668 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.668 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.669 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.669 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.669 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.669 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.669 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.669 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.669 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.670 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.670 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.670 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.670 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.670 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.670 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.671 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.671 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.671 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.671 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.671 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.671 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.671 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.672 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.672 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.672 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.672 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.672 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.672 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.672 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.673 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.673 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.673 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.673 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.673 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.673 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.673 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.673 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.674 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.674 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.674 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.674 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.674 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.674 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.674 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.675 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.675 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.675 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.675 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.675 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.675 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.675 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.676 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.676 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.676 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.676 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.676 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.676 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.676 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.677 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.677 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.677 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.677 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.677 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.677 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.677 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.678 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.678 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.678 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.678 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.678 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.678 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.678 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.678 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.679 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.679 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.679 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.679 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.679 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.679 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.679 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.679 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.680 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.680 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.680 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.680 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.680 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.680 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.680 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.681 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.681 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.681 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.681 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.681 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.681 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.681 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.681 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.682 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.682 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.682 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.682 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.682 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.682 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.682 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.683 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.683 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.683 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.683 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.683 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.683 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.683 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.684 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.684 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.684 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.684 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.684 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.684 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.684 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.685 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.685 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.685 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.685 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.685 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.685 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.685 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.686 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.686 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.686 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.686 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.686 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.686 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.686 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.686 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] privsep_osbrick.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.687 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.687 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.687 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.687 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.687 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.687 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] nova_sys_admin.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.687 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.688 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.688 240312 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.697 240312 DEBUG nova.virt.libvirt.host [None req-4f209c23-1ed1-4eec-a54b-5746b57a65ab - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec 13 02:29:14 np0005558317 podman[240944]: 2025-12-13 07:29:14.699137807 +0000 UTC m=+0.045940964 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.698 240312 DEBUG nova.virt.libvirt.host [None req-4f209c23-1ed1-4eec-a54b-5746b57a65ab - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.698 240312 DEBUG nova.virt.libvirt.host [None req-4f209c23-1ed1-4eec-a54b-5746b57a65ab - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.698 240312 DEBUG nova.virt.libvirt.host [None req-4f209c23-1ed1-4eec-a54b-5746b57a65ab - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec 13 02:29:14 np0005558317 systemd[1]: Starting libvirt QEMU daemon...
Dec 13 02:29:14 np0005558317 systemd[1]: Started libvirt QEMU daemon.
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.748 240312 DEBUG nova.virt.libvirt.host [None req-4f209c23-1ed1-4eec-a54b-5746b57a65ab - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f29569fad60> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.750 240312 DEBUG nova.virt.libvirt.host [None req-4f209c23-1ed1-4eec-a54b-5746b57a65ab - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f29569fad60> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.750 240312 INFO nova.virt.libvirt.driver [None req-4f209c23-1ed1-4eec-a54b-5746b57a65ab - - - - - -] Connection event '1' reason 'None'#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.762 240312 WARNING nova.virt.libvirt.driver [None req-4f209c23-1ed1-4eec-a54b-5746b57a65ab - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec 13 02:29:14 np0005558317 nova_compute[240308]: 2025-12-13 07:29:14.762 240312 DEBUG nova.virt.libvirt.volume.mount [None req-4f209c23-1ed1-4eec-a54b-5746b57a65ab - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Dec 13 02:29:15 np0005558317 python3.9[241165]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:29:15 np0005558317 systemd[1]: Stopping nova_compute container...
Dec 13 02:29:15 np0005558317 nova_compute[240308]: 2025-12-13 07:29:15.300 240312 DEBUG oslo_concurrency.lockutils [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 02:29:15 np0005558317 nova_compute[240308]: 2025-12-13 07:29:15.300 240312 DEBUG oslo_concurrency.lockutils [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 02:29:15 np0005558317 nova_compute[240308]: 2025-12-13 07:29:15.300 240312 DEBUG oslo_concurrency.lockutils [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 02:29:15 np0005558317 virtqemud[241006]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 13 02:29:15 np0005558317 virtqemud[241006]: hostname: compute-0
Dec 13 02:29:15 np0005558317 virtqemud[241006]: End of file while reading data: Input/output error
Dec 13 02:29:15 np0005558317 systemd[1]: libpod-85647c0fe69de86f6774be6672640c6d932467db58f079f97876cd7dae0a1306.scope: Deactivated successfully.
Dec 13 02:29:15 np0005558317 podman[241177]: 2025-12-13 07:29:15.698001979 +0000 UTC m=+0.423056443 container died 85647c0fe69de86f6774be6672640c6d932467db58f079f97876cd7dae0a1306 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 13 02:29:15 np0005558317 systemd[1]: libpod-85647c0fe69de86f6774be6672640c6d932467db58f079f97876cd7dae0a1306.scope: Consumed 2.424s CPU time.
Dec 13 02:29:15 np0005558317 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-85647c0fe69de86f6774be6672640c6d932467db58f079f97876cd7dae0a1306-userdata-shm.mount: Deactivated successfully.
Dec 13 02:29:15 np0005558317 systemd[1]: var-lib-containers-storage-overlay-6b99b624df972157fdfc5e1964b2f648ce2236d9f885434f04fdbb4225095713-merged.mount: Deactivated successfully.
Dec 13 02:29:16 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v575: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:17 np0005558317 podman[241177]: 2025-12-13 07:29:17.521266579 +0000 UTC m=+2.246321043 container cleanup 85647c0fe69de86f6774be6672640c6d932467db58f079f97876cd7dae0a1306 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 13 02:29:17 np0005558317 podman[241177]: nova_compute
Dec 13 02:29:17 np0005558317 podman[241200]: nova_compute
Dec 13 02:29:17 np0005558317 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec 13 02:29:17 np0005558317 systemd[1]: Stopped nova_compute container.
Dec 13 02:29:17 np0005558317 systemd[1]: Starting nova_compute container...
Dec 13 02:29:17 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:29:17 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b99b624df972157fdfc5e1964b2f648ce2236d9f885434f04fdbb4225095713/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 13 02:29:17 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b99b624df972157fdfc5e1964b2f648ce2236d9f885434f04fdbb4225095713/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 13 02:29:17 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b99b624df972157fdfc5e1964b2f648ce2236d9f885434f04fdbb4225095713/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 13 02:29:17 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b99b624df972157fdfc5e1964b2f648ce2236d9f885434f04fdbb4225095713/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 13 02:29:17 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b99b624df972157fdfc5e1964b2f648ce2236d9f885434f04fdbb4225095713/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 13 02:29:17 np0005558317 podman[241209]: 2025-12-13 07:29:17.654301554 +0000 UTC m=+0.069386768 container init 85647c0fe69de86f6774be6672640c6d932467db58f079f97876cd7dae0a1306 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251202)
Dec 13 02:29:17 np0005558317 podman[241209]: 2025-12-13 07:29:17.658708691 +0000 UTC m=+0.073793905 container start 85647c0fe69de86f6774be6672640c6d932467db58f079f97876cd7dae0a1306 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 02:29:17 np0005558317 podman[241209]: nova_compute
Dec 13 02:29:17 np0005558317 nova_compute[241222]: + sudo -E kolla_set_configs
Dec 13 02:29:17 np0005558317 systemd[1]: Started nova_compute container.
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Validating config file
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Copying service configuration files
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Deleting /etc/ceph
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Creating directory /etc/ceph
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Setting permission for /etc/ceph
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Writing out command to execute
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 13 02:29:17 np0005558317 nova_compute[241222]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 13 02:29:17 np0005558317 nova_compute[241222]: ++ cat /run_command
Dec 13 02:29:17 np0005558317 nova_compute[241222]: + CMD=nova-compute
Dec 13 02:29:17 np0005558317 nova_compute[241222]: + ARGS=
Dec 13 02:29:17 np0005558317 nova_compute[241222]: + sudo kolla_copy_cacerts
Dec 13 02:29:17 np0005558317 nova_compute[241222]: Running command: 'nova-compute'
Dec 13 02:29:17 np0005558317 nova_compute[241222]: + [[ ! -n '' ]]
Dec 13 02:29:17 np0005558317 nova_compute[241222]: + . kolla_extend_start
Dec 13 02:29:17 np0005558317 nova_compute[241222]: + echo 'Running command: '\''nova-compute'\'''
Dec 13 02:29:17 np0005558317 nova_compute[241222]: + umask 0022
Dec 13 02:29:17 np0005558317 nova_compute[241222]: + exec nova-compute
Dec 13 02:29:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:29:18 np0005558317 python3.9[241385]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 13 02:29:18 np0005558317 systemd[1]: Started libpod-conmon-547dab5fa81042d08d1db983dbc1e200a1ce8b11b8dce99b2f25723fa6b8123e.scope.
Dec 13 02:29:18 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:29:18 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb34115f18d67d1db81bec5df2bdc2ae866c8f99644700a5584e1256100a924/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec 13 02:29:18 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb34115f18d67d1db81bec5df2bdc2ae866c8f99644700a5584e1256100a924/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 13 02:29:18 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb34115f18d67d1db81bec5df2bdc2ae866c8f99644700a5584e1256100a924/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec 13 02:29:18 np0005558317 podman[241404]: 2025-12-13 07:29:18.368301657 +0000 UTC m=+0.087616576 container init 547dab5fa81042d08d1db983dbc1e200a1ce8b11b8dce99b2f25723fa6b8123e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Dec 13 02:29:18 np0005558317 podman[241404]: 2025-12-13 07:29:18.373203604 +0000 UTC m=+0.092518524 container start 547dab5fa81042d08d1db983dbc1e200a1ce8b11b8dce99b2f25723fa6b8123e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Dec 13 02:29:18 np0005558317 python3.9[241385]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec 13 02:29:18 np0005558317 nova_compute_init[241423]: INFO:nova_statedir:Applying nova statedir ownership
Dec 13 02:29:18 np0005558317 nova_compute_init[241423]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec 13 02:29:18 np0005558317 nova_compute_init[241423]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec 13 02:29:18 np0005558317 nova_compute_init[241423]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec 13 02:29:18 np0005558317 nova_compute_init[241423]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec 13 02:29:18 np0005558317 nova_compute_init[241423]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec 13 02:29:18 np0005558317 nova_compute_init[241423]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec 13 02:29:18 np0005558317 nova_compute_init[241423]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec 13 02:29:18 np0005558317 nova_compute_init[241423]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec 13 02:29:18 np0005558317 nova_compute_init[241423]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec 13 02:29:18 np0005558317 nova_compute_init[241423]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec 13 02:29:18 np0005558317 nova_compute_init[241423]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec 13 02:29:18 np0005558317 nova_compute_init[241423]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec 13 02:29:18 np0005558317 nova_compute_init[241423]: INFO:nova_statedir:Nova statedir ownership complete
Dec 13 02:29:18 np0005558317 systemd[1]: libpod-547dab5fa81042d08d1db983dbc1e200a1ce8b11b8dce99b2f25723fa6b8123e.scope: Deactivated successfully.
Dec 13 02:29:18 np0005558317 podman[241435]: 2025-12-13 07:29:18.454672721 +0000 UTC m=+0.025737776 container died 547dab5fa81042d08d1db983dbc1e200a1ce8b11b8dce99b2f25723fa6b8123e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 02:29:18 np0005558317 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-547dab5fa81042d08d1db983dbc1e200a1ce8b11b8dce99b2f25723fa6b8123e-userdata-shm.mount: Deactivated successfully.
Dec 13 02:29:18 np0005558317 podman[241435]: 2025-12-13 07:29:18.468223509 +0000 UTC m=+0.039288545 container cleanup 547dab5fa81042d08d1db983dbc1e200a1ce8b11b8dce99b2f25723fa6b8123e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible)
Dec 13 02:29:18 np0005558317 systemd[1]: libpod-conmon-547dab5fa81042d08d1db983dbc1e200a1ce8b11b8dce99b2f25723fa6b8123e.scope: Deactivated successfully.
Dec 13 02:29:18 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v576: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:18 np0005558317 systemd[1]: var-lib-containers-storage-overlay-4eb34115f18d67d1db81bec5df2bdc2ae866c8f99644700a5584e1256100a924-merged.mount: Deactivated successfully.
Dec 13 02:29:18 np0005558317 systemd[1]: session-52.scope: Deactivated successfully.
Dec 13 02:29:18 np0005558317 systemd[1]: session-52.scope: Consumed 1min 39.414s CPU time.
Dec 13 02:29:18 np0005558317 systemd-logind[745]: Session 52 logged out. Waiting for processes to exit.
Dec 13 02:29:18 np0005558317 systemd-logind[745]: Removed session 52.
Dec 13 02:29:19 np0005558317 nova_compute[241222]: 2025-12-13 07:29:19.435 241226 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec 13 02:29:19 np0005558317 nova_compute[241222]: 2025-12-13 07:29:19.436 241226 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec 13 02:29:19 np0005558317 nova_compute[241222]: 2025-12-13 07:29:19.436 241226 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec 13 02:29:19 np0005558317 nova_compute[241222]: 2025-12-13 07:29:19.436 241226 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec 13 02:29:19 np0005558317 nova_compute[241222]: 2025-12-13 07:29:19.549 241226 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 02:29:19 np0005558317 nova_compute[241222]: 2025-12-13 07:29:19.559 241226 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 02:29:19 np0005558317 nova_compute[241222]: 2025-12-13 07:29:19.559 241226 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Dec 13 02:29:19 np0005558317 nova_compute[241222]: 2025-12-13 07:29:19.928 241226 INFO nova.virt.driver [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.016 241226 INFO nova.compute.provider_config [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.026 241226 DEBUG oslo_concurrency.lockutils [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.026 241226 DEBUG oslo_concurrency.lockutils [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.026 241226 DEBUG oslo_concurrency.lockutils [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.026 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.027 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.027 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.027 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.027 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.027 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.027 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.027 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.028 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.028 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.028 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.028 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.028 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.028 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.028 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.029 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.029 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.029 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.029 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.029 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.029 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.029 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.030 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.030 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.030 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.030 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.030 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.030 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.030 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.031 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.031 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.031 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.031 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.031 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.031 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.031 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.032 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.032 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.032 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.032 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.032 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.032 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.033 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.033 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.033 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.033 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.033 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.033 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.033 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.034 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.034 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.034 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.034 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.034 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.034 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.034 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.034 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.035 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.035 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.035 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.035 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.035 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.035 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.035 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.036 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.036 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.036 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.036 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.036 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.036 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.036 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.036 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.037 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.037 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.037 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.037 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.037 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.037 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.037 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.038 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.038 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.038 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.038 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.038 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.038 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.038 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.039 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.039 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.039 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.039 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.039 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.039 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.039 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.039 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.040 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.040 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.040 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.040 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.040 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.040 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.040 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.041 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.041 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.041 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.041 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.041 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.041 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.041 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.041 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.042 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.042 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.042 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.042 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.042 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.042 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.042 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.043 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.043 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.043 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.043 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.043 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.043 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.044 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.044 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.044 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.044 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.044 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.044 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.044 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.044 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.045 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.045 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.045 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.045 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.045 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.045 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.045 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.046 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.046 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.046 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.046 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.046 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.046 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.046 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.047 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.047 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.047 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.047 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.047 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.047 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.047 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.048 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.048 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.048 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.048 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.048 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.048 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.048 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.049 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.049 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.049 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.049 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.049 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.049 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.049 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.050 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.050 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.050 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.050 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.050 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.050 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.050 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.051 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.051 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.051 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.051 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.051 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.051 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.051 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.052 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.052 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.052 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.052 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.052 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.052 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.052 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.053 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.053 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.053 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.053 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.053 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.053 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.053 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.053 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.054 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.054 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.054 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.054 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.054 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.054 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.054 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.055 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.055 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.055 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.055 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.055 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.055 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.055 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.056 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.056 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.056 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.056 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.056 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.056 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.056 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.056 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.057 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.057 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.057 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.057 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.057 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.057 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.057 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.058 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.058 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.058 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.058 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.058 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.058 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.058 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.059 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.059 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.059 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.059 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.059 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.059 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.059 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.059 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.060 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.060 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.060 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.060 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.060 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.060 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.060 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.061 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.061 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.061 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.061 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.061 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.061 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.061 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.062 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.062 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.062 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.062 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.062 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.062 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.062 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.063 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.063 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.063 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.063 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.063 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.063 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.063 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.063 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.064 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.064 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.064 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.064 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.064 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.064 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.064 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.065 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.065 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.065 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.065 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.065 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.065 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.065 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.066 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.066 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.066 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.066 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.066 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.066 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.066 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.067 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.067 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.067 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.067 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.067 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.067 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.067 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.068 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.068 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.068 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.068 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.068 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.068 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.068 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.069 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.069 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.069 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.069 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.069 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.069 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.069 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.070 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.070 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.070 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.070 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.070 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.070 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.070 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.071 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.071 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.071 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.071 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.071 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.071 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.071 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.071 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.072 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.072 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.072 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.072 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.072 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.072 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.072 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.073 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.073 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.073 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.073 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.073 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.073 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.073 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.074 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.074 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.074 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.074 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.074 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.074 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.074 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.075 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.075 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.075 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.075 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.075 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.075 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.075 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.075 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.076 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.076 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.076 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.076 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.076 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.077 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.077 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.077 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.077 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.077 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.077 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.077 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.078 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.078 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.078 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.078 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.078 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.078 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.078 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.079 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.079 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.079 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.079 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.079 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.079 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.079 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.080 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.080 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.080 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.080 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.080 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.080 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.080 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.081 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.081 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.081 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.081 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.081 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.081 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.081 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.082 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.082 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.082 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.082 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.082 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.082 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.082 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.083 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.083 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.083 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.083 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.083 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.083 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.083 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.083 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.084 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.084 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.084 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.084 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.084 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.084 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.084 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.085 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.085 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.085 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.085 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.085 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.085 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.085 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.086 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.086 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.086 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.086 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.086 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.086 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.086 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.087 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.087 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.087 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.087 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.087 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.087 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.087 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.088 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.088 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.088 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.088 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.088 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.088 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.088 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.089 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.089 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.089 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.089 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.089 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.089 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.089 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.089 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.090 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.090 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.090 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.090 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.090 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.090 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.091 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.091 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.091 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.091 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.091 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.091 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.091 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.092 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.092 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.092 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.092 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.092 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.092 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.093 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.093 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.093 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.093 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.093 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.093 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.093 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.094 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.094 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.094 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.094 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.094 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.094 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.095 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.095 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.095 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.095 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.095 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.095 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.095 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.096 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.096 241226 WARNING oslo_config.cfg [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 13 02:29:20 np0005558317 nova_compute[241222]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 13 02:29:20 np0005558317 nova_compute[241222]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 13 02:29:20 np0005558317 nova_compute[241222]: and ``live_migration_inbound_addr`` respectively.
Dec 13 02:29:20 np0005558317 nova_compute[241222]: ).  Its value may be silently ignored in the future.#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.096 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.096 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.096 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.097 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.097 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.097 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.097 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.097 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.097 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.097 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.098 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.098 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.098 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.098 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.098 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.098 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.098 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.099 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.099 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.rbd_secret_uuid        = 00fdae1b-7fad-5f1b-8734-ba4d9298a6de log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.099 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.099 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.099 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.099 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.099 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.100 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.100 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.100 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.100 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.100 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.100 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.100 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.101 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.101 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.101 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.101 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.101 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.101 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.102 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.102 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.102 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.102 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.102 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.102 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.102 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.103 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.103 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.103 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.103 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.103 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.103 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.103 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.104 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.104 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.104 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.104 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.104 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.104 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.104 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.105 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.105 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.105 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.105 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.105 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.105 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.105 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.105 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.106 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.106 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.106 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.106 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.106 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.106 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.106 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.107 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.107 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.107 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.107 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.107 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.107 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.107 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.108 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.108 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.108 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.108 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.108 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.108 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.108 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.109 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.109 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.109 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.109 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.109 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.109 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.109 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.110 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.110 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.110 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.110 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.110 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.110 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.110 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.110 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.111 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.111 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.111 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.111 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.111 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.111 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.111 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.112 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.112 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.112 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.112 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.112 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.112 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.112 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.113 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.113 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.113 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.113 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.113 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.113 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.113 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.113 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.114 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.114 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.114 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.114 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.114 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.114 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.114 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.115 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.115 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.115 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.115 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.115 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.115 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.115 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.116 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.116 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.116 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.116 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.116 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.116 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.117 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.117 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.117 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.117 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.117 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.117 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.117 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.118 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.118 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.118 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.118 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.118 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.118 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.118 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.119 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.119 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.119 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.119 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.119 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.119 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.119 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.120 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.120 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.120 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.120 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.120 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.120 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.120 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.121 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.121 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.121 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.121 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.121 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.121 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.121 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.122 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.122 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.122 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.122 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.122 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.122 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.122 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.123 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.123 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.123 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.123 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.123 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.123 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.123 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.124 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.124 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.124 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.124 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.124 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.124 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.125 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.125 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.125 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.125 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.125 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.125 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.125 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.126 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.126 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.126 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.126 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.126 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.126 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.126 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.126 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.127 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.127 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.127 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.127 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.127 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.127 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.127 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.128 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.128 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.128 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.128 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.128 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.128 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.128 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.128 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.129 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.129 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.129 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.129 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.129 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.129 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.129 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.130 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.130 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.130 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.130 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.130 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.130 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.130 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.131 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.131 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.131 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.131 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.131 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.131 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.132 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.132 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.132 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.132 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.132 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.132 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.133 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.133 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.133 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.133 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.133 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.133 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.133 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.133 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.134 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.134 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.134 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.134 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.134 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.134 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.134 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.135 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.135 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.135 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.135 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.135 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.135 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.135 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.136 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.136 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.136 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.136 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.136 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.136 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.136 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.137 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.137 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.137 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.137 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.137 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.137 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.137 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.138 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.138 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.138 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.138 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.138 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.138 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.138 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.139 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.139 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.139 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.139 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.139 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.139 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.139 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.140 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.140 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.140 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.140 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.140 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.140 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.140 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.141 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.141 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.141 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.141 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.141 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.141 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.141 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.141 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.142 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.142 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.142 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.142 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.142 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.142 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.142 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.143 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.143 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.143 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.143 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.143 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.143 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.143 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.144 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.144 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.144 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.144 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.144 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.144 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.144 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.145 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.145 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.145 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.145 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.145 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.145 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.145 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.146 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.146 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.146 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.146 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.146 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.146 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.146 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.147 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.147 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.147 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.147 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.147 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.147 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.147 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.147 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.148 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.148 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.148 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.148 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.148 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.148 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.148 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.149 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.149 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.149 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.149 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.149 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.149 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.149 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.149 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.150 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.150 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.150 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.150 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.150 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.150 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.150 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.151 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.151 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.151 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.151 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.151 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.151 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.151 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.152 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.152 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.152 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.152 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.152 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.152 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.152 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.153 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.153 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.153 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.153 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.153 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.153 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.153 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.154 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.154 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.154 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.154 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.154 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.155 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.155 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.155 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.155 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.155 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] privsep_osbrick.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.155 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.155 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.156 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.156 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.156 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.156 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] nova_sys_admin.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.156 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.156 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.157 241226 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.180 241226 DEBUG nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.181 241226 DEBUG nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.181 241226 DEBUG nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.181 241226 DEBUG nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.190 241226 DEBUG nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f4341865be0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.192 241226 DEBUG nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f4341865be0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.192 241226 INFO nova.virt.libvirt.driver [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Connection event '1' reason 'None'#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.196 241226 INFO nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Libvirt host capabilities <capabilities>
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <host>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <uuid>bdf0d7c0-5eef-46ac-89a1-b1ab7cc430f1</uuid>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <cpu>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <arch>x86_64</arch>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model>EPYC-Milan-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <vendor>AMD</vendor>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <microcode version='167776725'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <signature family='25' model='1' stepping='1'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <topology sockets='4' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <maxphysaddr mode='emulate' bits='48'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature name='x2apic'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature name='tsc-deadline'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature name='osxsave'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature name='hypervisor'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature name='tsc_adjust'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature name='ospke'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature name='vaes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature name='vpclmulqdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature name='spec-ctrl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature name='stibp'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature name='arch-capabilities'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature name='ssbd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature name='cmp_legacy'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature name='virt-ssbd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature name='lbrv'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature name='tsc-scale'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature name='vmcb-clean'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature name='pause-filter'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature name='pfthreshold'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature name='v-vmsave-vmload'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature name='vgif'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature name='rdctl-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature name='skip-l1dfl-vmentry'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature name='mds-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature name='pschange-mc-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <pages unit='KiB' size='4'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <pages unit='KiB' size='2048'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <pages unit='KiB' size='1048576'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </cpu>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <power_management>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <suspend_mem/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </power_management>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <iommu support='no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <migration_features>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <live/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <uri_transports>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <uri_transport>tcp</uri_transport>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <uri_transport>rdma</uri_transport>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </uri_transports>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </migration_features>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <topology>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <cells num='1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <cell id='0'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:          <memory unit='KiB'>7865356</memory>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:          <pages unit='KiB' size='4'>1966339</pages>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:          <pages unit='KiB' size='2048'>0</pages>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:          <pages unit='KiB' size='1048576'>0</pages>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:          <distances>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:            <sibling id='0' value='10'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:          </distances>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:          <cpus num='4'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:          </cpus>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        </cell>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </cells>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </topology>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <cache>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </cache>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <secmodel>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model>selinux</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <doi>0</doi>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </secmodel>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <secmodel>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model>dac</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <doi>0</doi>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </secmodel>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  </host>
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <guest>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <os_type>hvm</os_type>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <arch name='i686'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <wordsize>32</wordsize>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <domain type='qemu'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <domain type='kvm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </arch>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <features>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <pae/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <nonpae/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <acpi default='on' toggle='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <apic default='on' toggle='no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <cpuselection/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <deviceboot/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <disksnapshot default='on' toggle='no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <externalSnapshot/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </features>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  </guest>
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <guest>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <os_type>hvm</os_type>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <arch name='x86_64'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <wordsize>64</wordsize>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <domain type='qemu'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <domain type='kvm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </arch>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <features>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <acpi default='on' toggle='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <apic default='on' toggle='no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <cpuselection/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <deviceboot/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <disksnapshot default='on' toggle='no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <externalSnapshot/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </features>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  </guest>
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 
Dec 13 02:29:20 np0005558317 nova_compute[241222]: </capabilities>
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.201 241226 DEBUG nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.203 241226 WARNING nova.virt.libvirt.driver [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.203 241226 DEBUG nova.virt.libvirt.volume.mount [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.217 241226 DEBUG nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 13 02:29:20 np0005558317 nova_compute[241222]: <domainCapabilities>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <path>/usr/libexec/qemu-kvm</path>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <domain>kvm</domain>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <machine>pc-q35-rhel9.8.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <arch>i686</arch>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <vcpu max='4096'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <iothreads supported='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <os supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <enum name='firmware'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <loader supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='type'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>rom</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>pflash</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='readonly'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>yes</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>no</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='secure'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>no</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </loader>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  </os>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <cpu>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <mode name='host-passthrough' supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='hostPassthroughMigratable'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>on</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>off</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </mode>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <mode name='maximum' supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='maximumMigratable'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>on</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>off</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </mode>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <mode name='host-model' supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model fallback='forbid'>EPYC-Milan</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <vendor>AMD</vendor>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <maxphysaddr mode='passthrough' limit='48'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='x2apic'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='tsc-deadline'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='hypervisor'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='tsc_adjust'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='vaes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='vpclmulqdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='spec-ctrl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='stibp'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='ssbd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='cmp_legacy'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='overflow-recov'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='succor'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='virt-ssbd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='lbrv'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='tsc-scale'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='vmcb-clean'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='flushbyasid'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='pause-filter'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='pfthreshold'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='v-vmsave-vmload'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='vgif'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='lfence-always-serializing'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </mode>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <mode name='custom' supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Broadwell'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Broadwell-IBRS'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Broadwell-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Broadwell-v3'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cascadelake-Server'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cascadelake-Server-noTSX'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cascadelake-Server-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cascadelake-Server-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cascadelake-Server-v3'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cascadelake-Server-v4'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cascadelake-Server-v5'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cooperlake'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cooperlake-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cooperlake-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Denverton'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mpx'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Denverton-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mpx'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='EPYC-Genoa'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amd-psfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='auto-ibrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='no-nested-data-bp'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='null-sel-clr-base'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='stibp-always-on'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='EPYC-Genoa-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amd-psfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='auto-ibrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='no-nested-data-bp'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='null-sel-clr-base'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='stibp-always-on'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='EPYC-Milan-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amd-psfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='no-nested-data-bp'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='null-sel-clr-base'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='stibp-always-on'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='GraniteRapids'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-tile'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fbsdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrc'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fzrm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mcdt-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='pbrsb-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='prefetchiti'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='psdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tsx-ldtrk'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='GraniteRapids-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-tile'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fbsdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrc'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fzrm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mcdt-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='pbrsb-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='prefetchiti'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='psdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tsx-ldtrk'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='GraniteRapids-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-tile'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx10'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx10-128'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx10-256'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx10-512'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cldemote'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fbsdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrc'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fzrm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mcdt-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdir64b'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdiri'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='pbrsb-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='prefetchiti'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='psdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tsx-ldtrk'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Haswell'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Haswell-IBRS'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Haswell-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Haswell-v3'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-noTSX'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-v3'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-v4'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-v5'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-v6'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-v7'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='KnightsMill'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-4fmaps'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-4vnniw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512er'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512pf'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='KnightsMill-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-4fmaps'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-4vnniw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512er'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512pf'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Opteron_G4'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fma4'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xop'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Opteron_G4-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fma4'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xop'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Opteron_G5'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fma4'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tbm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xop'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Opteron_G5-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fma4'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tbm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xop'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='SapphireRapids'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-tile'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrc'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fzrm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tsx-ldtrk'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='SapphireRapids-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-tile'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrc'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fzrm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tsx-ldtrk'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='SapphireRapids-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-tile'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fbsdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrc'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fzrm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='psdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tsx-ldtrk'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='SapphireRapids-v3'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-tile'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cldemote'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fbsdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrc'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fzrm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdir64b'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdiri'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='psdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tsx-ldtrk'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='SierraForest'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-ne-convert'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cmpccxadd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fbsdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mcdt-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='pbrsb-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='psdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='SierraForest-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-ne-convert'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cmpccxadd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fbsdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mcdt-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='pbrsb-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='psdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Client'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Client-IBRS'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Client-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Client-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server-IBRS'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server-v3'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server-v4'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server-v5'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Snowridge'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cldemote'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='core-capability'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdir64b'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdiri'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mpx'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='split-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Snowridge-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cldemote'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='core-capability'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdir64b'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdiri'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mpx'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='split-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Snowridge-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cldemote'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='core-capability'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdir64b'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdiri'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='split-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Snowridge-v3'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cldemote'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='core-capability'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdir64b'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdiri'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='split-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Snowridge-v4'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cldemote'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdir64b'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdiri'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='athlon'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnow'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnowext'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='athlon-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnow'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnowext'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='core2duo'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='core2duo-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='coreduo'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='coreduo-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='n270'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='n270-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='phenom'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnow'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnowext'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='phenom-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnow'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnowext'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </mode>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  </cpu>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <memoryBacking supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <enum name='sourceType'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <value>file</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <value>anonymous</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <value>memfd</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  </memoryBacking>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <devices>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <disk supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='diskDevice'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>disk</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>cdrom</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>floppy</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>lun</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='bus'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>fdc</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>scsi</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>usb</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>sata</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='model'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio-transitional</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio-non-transitional</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </disk>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <graphics supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='type'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>vnc</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>egl-headless</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>dbus</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </graphics>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <video supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='modelType'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>vga</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>cirrus</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>none</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>bochs</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>ramfb</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </video>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <hostdev supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='mode'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>subsystem</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='startupPolicy'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>default</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>mandatory</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>requisite</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>optional</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='subsysType'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>usb</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>pci</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>scsi</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='capsType'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='pciBackend'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </hostdev>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <rng supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='model'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio-transitional</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio-non-transitional</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='backendModel'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>random</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>egd</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>builtin</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </rng>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <filesystem supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='driverType'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>path</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>handle</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtiofs</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </filesystem>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <tpm supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='model'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>tpm-tis</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>tpm-crb</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='backendModel'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>emulator</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>external</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='backendVersion'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>2.0</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </tpm>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <redirdev supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='bus'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>usb</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </redirdev>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <channel supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='type'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>pty</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>unix</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </channel>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <crypto supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='model'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='type'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>qemu</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='backendModel'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>builtin</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </crypto>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <interface supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='backendType'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>default</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>passt</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </interface>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <panic supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='model'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>isa</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>hyperv</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </panic>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <console supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='type'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>null</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>vc</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>pty</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>dev</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>file</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>pipe</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>stdio</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>udp</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>tcp</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>unix</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>qemu-vdagent</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>dbus</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </console>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  </devices>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <features>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <gic supported='no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <vmcoreinfo supported='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <genid supported='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <backingStoreInput supported='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <backup supported='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <async-teardown supported='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <ps2 supported='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <sev supported='no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <sgx supported='no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <hyperv supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='features'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>relaxed</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>vapic</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>spinlocks</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>vpindex</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>runtime</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>synic</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>stimer</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>reset</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>vendor_id</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>frequencies</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>reenlightenment</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>tlbflush</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>ipi</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>avic</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>emsr_bitmap</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>xmm_input</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <defaults>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <spinlocks>4095</spinlocks>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <stimer_direct>on</stimer_direct>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <tlbflush_direct>on</tlbflush_direct>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <tlbflush_extended>on</tlbflush_extended>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </defaults>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </hyperv>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <launchSecurity supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='sectype'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>tdx</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </launchSecurity>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  </features>
Dec 13 02:29:20 np0005558317 nova_compute[241222]: </domainCapabilities>
Dec 13 02:29:20 np0005558317 nova_compute[241222]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.225 241226 DEBUG nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 13 02:29:20 np0005558317 nova_compute[241222]: <domainCapabilities>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <path>/usr/libexec/qemu-kvm</path>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <domain>kvm</domain>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <arch>i686</arch>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <vcpu max='240'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <iothreads supported='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <os supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <enum name='firmware'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <loader supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='type'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>rom</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>pflash</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='readonly'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>yes</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>no</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='secure'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>no</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </loader>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  </os>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <cpu>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <mode name='host-passthrough' supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='hostPassthroughMigratable'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>on</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>off</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </mode>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <mode name='maximum' supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='maximumMigratable'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>on</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>off</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </mode>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <mode name='host-model' supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model fallback='forbid'>EPYC-Milan</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <vendor>AMD</vendor>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <maxphysaddr mode='passthrough' limit='48'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='x2apic'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='tsc-deadline'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='hypervisor'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='tsc_adjust'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='vaes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='vpclmulqdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='spec-ctrl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='stibp'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='ssbd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='cmp_legacy'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='overflow-recov'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='succor'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='virt-ssbd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='lbrv'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='tsc-scale'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='vmcb-clean'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='flushbyasid'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='pause-filter'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='pfthreshold'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='v-vmsave-vmload'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='vgif'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='lfence-always-serializing'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </mode>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <mode name='custom' supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Broadwell'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Broadwell-IBRS'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Broadwell-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Broadwell-v3'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cascadelake-Server'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cascadelake-Server-noTSX'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cascadelake-Server-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cascadelake-Server-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cascadelake-Server-v3'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cascadelake-Server-v4'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cascadelake-Server-v5'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cooperlake'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cooperlake-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cooperlake-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Denverton'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mpx'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Denverton-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mpx'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='EPYC-Genoa'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amd-psfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='auto-ibrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='no-nested-data-bp'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='null-sel-clr-base'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='stibp-always-on'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='EPYC-Genoa-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amd-psfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='auto-ibrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='no-nested-data-bp'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='null-sel-clr-base'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='stibp-always-on'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='EPYC-Milan-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amd-psfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='no-nested-data-bp'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='null-sel-clr-base'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='stibp-always-on'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='GraniteRapids'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-tile'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fbsdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrc'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fzrm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mcdt-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='pbrsb-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='prefetchiti'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='psdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tsx-ldtrk'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='GraniteRapids-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-tile'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fbsdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrc'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fzrm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mcdt-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='pbrsb-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='prefetchiti'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='psdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tsx-ldtrk'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='GraniteRapids-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-tile'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx10'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx10-128'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx10-256'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx10-512'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cldemote'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fbsdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrc'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fzrm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mcdt-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdir64b'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdiri'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='pbrsb-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='prefetchiti'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='psdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tsx-ldtrk'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Haswell'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Haswell-IBRS'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Haswell-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Haswell-v3'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-noTSX'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-v3'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-v4'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-v5'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-v6'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-v7'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='KnightsMill'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-4fmaps'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-4vnniw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512er'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512pf'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='KnightsMill-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-4fmaps'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-4vnniw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512er'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512pf'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Opteron_G4'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fma4'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xop'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Opteron_G4-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fma4'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xop'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Opteron_G5'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fma4'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tbm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xop'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Opteron_G5-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fma4'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tbm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xop'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='SapphireRapids'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-tile'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrc'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fzrm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tsx-ldtrk'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='SapphireRapids-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-tile'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrc'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fzrm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tsx-ldtrk'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='SapphireRapids-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-tile'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fbsdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrc'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fzrm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='psdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tsx-ldtrk'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='SapphireRapids-v3'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-tile'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cldemote'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fbsdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrc'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fzrm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdir64b'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdiri'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='psdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tsx-ldtrk'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='SierraForest'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-ne-convert'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cmpccxadd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fbsdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mcdt-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='pbrsb-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='psdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='SierraForest-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-ne-convert'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cmpccxadd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fbsdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mcdt-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='pbrsb-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='psdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Client'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Client-IBRS'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Client-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Client-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server-IBRS'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server-v3'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server-v4'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server-v5'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Snowridge'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cldemote'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='core-capability'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdir64b'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdiri'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mpx'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='split-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Snowridge-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cldemote'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='core-capability'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdir64b'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdiri'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mpx'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='split-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Snowridge-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cldemote'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='core-capability'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdir64b'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdiri'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='split-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Snowridge-v3'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cldemote'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='core-capability'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdir64b'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdiri'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='split-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Snowridge-v4'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cldemote'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdir64b'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdiri'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='athlon'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnow'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnowext'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='athlon-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnow'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnowext'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='core2duo'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='core2duo-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='coreduo'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='coreduo-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='n270'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='n270-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='phenom'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnow'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnowext'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='phenom-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnow'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnowext'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </mode>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  </cpu>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <memoryBacking supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <enum name='sourceType'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <value>file</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <value>anonymous</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <value>memfd</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  </memoryBacking>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <devices>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <disk supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='diskDevice'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>disk</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>cdrom</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>floppy</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>lun</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='bus'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>ide</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>fdc</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>scsi</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>usb</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>sata</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='model'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio-transitional</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio-non-transitional</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </disk>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <graphics supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='type'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>vnc</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>egl-headless</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>dbus</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </graphics>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <video supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='modelType'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>vga</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>cirrus</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>none</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>bochs</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>ramfb</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </video>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <hostdev supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='mode'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>subsystem</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='startupPolicy'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>default</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>mandatory</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>requisite</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>optional</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='subsysType'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>usb</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>pci</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>scsi</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='capsType'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='pciBackend'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </hostdev>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <rng supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='model'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio-transitional</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio-non-transitional</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='backendModel'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>random</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>egd</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>builtin</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </rng>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <filesystem supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='driverType'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>path</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>handle</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtiofs</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </filesystem>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <tpm supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='model'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>tpm-tis</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>tpm-crb</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='backendModel'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>emulator</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>external</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='backendVersion'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>2.0</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </tpm>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <redirdev supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='bus'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>usb</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </redirdev>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <channel supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='type'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>pty</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>unix</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </channel>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <crypto supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='model'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='type'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>qemu</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='backendModel'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>builtin</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </crypto>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <interface supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='backendType'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>default</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>passt</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </interface>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <panic supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='model'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>isa</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>hyperv</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </panic>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <console supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='type'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>null</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>vc</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>pty</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>dev</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>file</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>pipe</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>stdio</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>udp</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>tcp</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>unix</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>qemu-vdagent</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>dbus</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </console>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  </devices>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <features>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <gic supported='no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <vmcoreinfo supported='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <genid supported='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <backingStoreInput supported='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <backup supported='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <async-teardown supported='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <ps2 supported='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <sev supported='no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <sgx supported='no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <hyperv supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='features'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>relaxed</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>vapic</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>spinlocks</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>vpindex</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>runtime</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>synic</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>stimer</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>reset</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>vendor_id</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>frequencies</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>reenlightenment</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>tlbflush</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>ipi</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>avic</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>emsr_bitmap</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>xmm_input</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <defaults>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <spinlocks>4095</spinlocks>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <stimer_direct>on</stimer_direct>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <tlbflush_direct>on</tlbflush_direct>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <tlbflush_extended>on</tlbflush_extended>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </defaults>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </hyperv>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <launchSecurity supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='sectype'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>tdx</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </launchSecurity>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  </features>
Dec 13 02:29:20 np0005558317 nova_compute[241222]: </domainCapabilities>
Dec 13 02:29:20 np0005558317 nova_compute[241222]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.226 241226 DEBUG nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.228 241226 DEBUG nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 13 02:29:20 np0005558317 nova_compute[241222]: <domainCapabilities>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <path>/usr/libexec/qemu-kvm</path>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <domain>kvm</domain>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <machine>pc-q35-rhel9.8.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <arch>x86_64</arch>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <vcpu max='4096'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <iothreads supported='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <os supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <enum name='firmware'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <value>efi</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <loader supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='type'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>rom</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>pflash</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='readonly'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>yes</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>no</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='secure'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>yes</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>no</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </loader>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  </os>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <cpu>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <mode name='host-passthrough' supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='hostPassthroughMigratable'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>on</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>off</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </mode>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <mode name='maximum' supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='maximumMigratable'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>on</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>off</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </mode>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <mode name='host-model' supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model fallback='forbid'>EPYC-Milan</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <vendor>AMD</vendor>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <maxphysaddr mode='passthrough' limit='48'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='x2apic'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='tsc-deadline'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='hypervisor'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='tsc_adjust'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='vaes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='vpclmulqdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='spec-ctrl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='stibp'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='ssbd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='cmp_legacy'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='overflow-recov'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='succor'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='virt-ssbd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='lbrv'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='tsc-scale'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='vmcb-clean'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='flushbyasid'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='pause-filter'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='pfthreshold'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='v-vmsave-vmload'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='vgif'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='lfence-always-serializing'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </mode>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <mode name='custom' supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Broadwell'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Broadwell-IBRS'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Broadwell-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Broadwell-v3'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cascadelake-Server'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cascadelake-Server-noTSX'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cascadelake-Server-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cascadelake-Server-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cascadelake-Server-v3'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cascadelake-Server-v4'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cascadelake-Server-v5'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cooperlake'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cooperlake-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cooperlake-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Denverton'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mpx'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Denverton-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mpx'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='EPYC-Genoa'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amd-psfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='auto-ibrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='no-nested-data-bp'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='null-sel-clr-base'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='stibp-always-on'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='EPYC-Genoa-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amd-psfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='auto-ibrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='no-nested-data-bp'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='null-sel-clr-base'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='stibp-always-on'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='EPYC-Milan-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amd-psfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='no-nested-data-bp'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='null-sel-clr-base'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='stibp-always-on'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='GraniteRapids'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-tile'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fbsdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrc'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fzrm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mcdt-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='pbrsb-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='prefetchiti'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='psdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tsx-ldtrk'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='GraniteRapids-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-tile'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fbsdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrc'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fzrm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mcdt-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='pbrsb-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='prefetchiti'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='psdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tsx-ldtrk'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='GraniteRapids-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-tile'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx10'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx10-128'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx10-256'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx10-512'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cldemote'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fbsdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrc'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fzrm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mcdt-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdir64b'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdiri'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='pbrsb-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='prefetchiti'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='psdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tsx-ldtrk'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Haswell'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Haswell-IBRS'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Haswell-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Haswell-v3'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-noTSX'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-v3'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-v4'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-v5'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-v6'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-v7'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='KnightsMill'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-4fmaps'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-4vnniw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512er'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512pf'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='KnightsMill-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-4fmaps'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-4vnniw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512er'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512pf'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Opteron_G4'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fma4'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xop'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Opteron_G4-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fma4'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xop'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Opteron_G5'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fma4'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tbm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xop'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Opteron_G5-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fma4'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tbm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xop'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='SapphireRapids'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-tile'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrc'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fzrm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tsx-ldtrk'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='SapphireRapids-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-tile'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrc'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fzrm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tsx-ldtrk'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='SapphireRapids-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-tile'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fbsdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrc'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fzrm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='psdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tsx-ldtrk'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='SapphireRapids-v3'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-tile'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cldemote'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fbsdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrc'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fzrm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdir64b'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdiri'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='psdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tsx-ldtrk'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='SierraForest'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-ne-convert'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cmpccxadd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fbsdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mcdt-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='pbrsb-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='psdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='SierraForest-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-ne-convert'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cmpccxadd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fbsdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mcdt-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='pbrsb-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='psdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Client'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Client-IBRS'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Client-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Client-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server-IBRS'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server-v3'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server-v4'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server-v5'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Snowridge'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cldemote'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='core-capability'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdir64b'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdiri'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mpx'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='split-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Snowridge-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cldemote'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='core-capability'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdir64b'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdiri'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mpx'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='split-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Snowridge-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cldemote'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='core-capability'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdir64b'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdiri'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='split-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Snowridge-v3'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cldemote'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='core-capability'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdir64b'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdiri'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='split-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Snowridge-v4'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cldemote'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdir64b'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdiri'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='athlon'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnow'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnowext'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='athlon-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnow'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnowext'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='core2duo'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='core2duo-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='coreduo'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='coreduo-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='n270'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='n270-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='phenom'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnow'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnowext'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='phenom-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnow'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnowext'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </mode>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  </cpu>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <memoryBacking supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <enum name='sourceType'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <value>file</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <value>anonymous</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <value>memfd</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  </memoryBacking>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <devices>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <disk supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='diskDevice'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>disk</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>cdrom</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>floppy</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>lun</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='bus'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>fdc</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>scsi</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>usb</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>sata</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='model'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio-transitional</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio-non-transitional</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </disk>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <graphics supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='type'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>vnc</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>egl-headless</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>dbus</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </graphics>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <video supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='modelType'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>vga</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>cirrus</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>none</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>bochs</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>ramfb</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </video>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <hostdev supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='mode'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>subsystem</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='startupPolicy'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>default</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>mandatory</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>requisite</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>optional</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='subsysType'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>usb</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>pci</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>scsi</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='capsType'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='pciBackend'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </hostdev>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <rng supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='model'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio-transitional</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio-non-transitional</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='backendModel'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>random</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>egd</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>builtin</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </rng>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <filesystem supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='driverType'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>path</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>handle</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtiofs</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </filesystem>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <tpm supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='model'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>tpm-tis</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>tpm-crb</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='backendModel'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>emulator</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>external</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='backendVersion'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>2.0</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </tpm>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <redirdev supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='bus'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>usb</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </redirdev>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <channel supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='type'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>pty</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>unix</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </channel>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <crypto supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='model'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='type'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>qemu</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='backendModel'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>builtin</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </crypto>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <interface supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='backendType'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>default</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>passt</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </interface>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <panic supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='model'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>isa</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>hyperv</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </panic>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <console supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='type'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>null</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>vc</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>pty</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>dev</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>file</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>pipe</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>stdio</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>udp</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>tcp</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>unix</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>qemu-vdagent</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>dbus</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </console>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  </devices>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <features>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <gic supported='no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <vmcoreinfo supported='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <genid supported='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <backingStoreInput supported='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <backup supported='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <async-teardown supported='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <ps2 supported='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <sev supported='no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <sgx supported='no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <hyperv supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='features'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>relaxed</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>vapic</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>spinlocks</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>vpindex</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>runtime</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>synic</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>stimer</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>reset</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>vendor_id</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>frequencies</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>reenlightenment</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>tlbflush</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>ipi</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>avic</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>emsr_bitmap</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>xmm_input</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <defaults>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <spinlocks>4095</spinlocks>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <stimer_direct>on</stimer_direct>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <tlbflush_direct>on</tlbflush_direct>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <tlbflush_extended>on</tlbflush_extended>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </defaults>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </hyperv>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <launchSecurity supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='sectype'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>tdx</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </launchSecurity>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  </features>
Dec 13 02:29:20 np0005558317 nova_compute[241222]: </domainCapabilities>
Dec 13 02:29:20 np0005558317 nova_compute[241222]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.269 241226 DEBUG nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 13 02:29:20 np0005558317 nova_compute[241222]: <domainCapabilities>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <path>/usr/libexec/qemu-kvm</path>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <domain>kvm</domain>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <arch>x86_64</arch>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <vcpu max='240'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <iothreads supported='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <os supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <enum name='firmware'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <loader supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='type'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>rom</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>pflash</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='readonly'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>yes</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>no</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='secure'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>no</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </loader>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  </os>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <cpu>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <mode name='host-passthrough' supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='hostPassthroughMigratable'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>on</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>off</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </mode>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <mode name='maximum' supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='maximumMigratable'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>on</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>off</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </mode>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <mode name='host-model' supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model fallback='forbid'>EPYC-Milan</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <vendor>AMD</vendor>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <maxphysaddr mode='passthrough' limit='48'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='x2apic'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='tsc-deadline'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='hypervisor'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='tsc_adjust'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='vaes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='vpclmulqdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='spec-ctrl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='stibp'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='ssbd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='cmp_legacy'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='overflow-recov'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='succor'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='virt-ssbd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='lbrv'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='tsc-scale'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='vmcb-clean'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='flushbyasid'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='pause-filter'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='pfthreshold'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='v-vmsave-vmload'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='vgif'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <feature policy='require' name='lfence-always-serializing'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </mode>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <mode name='custom' supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Broadwell'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Broadwell-IBRS'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Broadwell-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Broadwell-v3'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cascadelake-Server'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cascadelake-Server-noTSX'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cascadelake-Server-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cascadelake-Server-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cascadelake-Server-v3'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cascadelake-Server-v4'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cascadelake-Server-v5'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cooperlake'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cooperlake-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Cooperlake-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Denverton'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mpx'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Denverton-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mpx'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='EPYC-Genoa'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amd-psfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='auto-ibrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='no-nested-data-bp'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='null-sel-clr-base'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='stibp-always-on'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='EPYC-Genoa-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amd-psfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='auto-ibrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='no-nested-data-bp'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='null-sel-clr-base'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='stibp-always-on'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='EPYC-Milan-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amd-psfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='no-nested-data-bp'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='null-sel-clr-base'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='stibp-always-on'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='GraniteRapids'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-tile'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fbsdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrc'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fzrm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mcdt-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='pbrsb-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='prefetchiti'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='psdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tsx-ldtrk'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='GraniteRapids-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-tile'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fbsdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrc'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fzrm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mcdt-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='pbrsb-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='prefetchiti'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='psdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tsx-ldtrk'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='GraniteRapids-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-tile'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx10'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx10-128'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx10-256'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx10-512'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cldemote'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fbsdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrc'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fzrm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mcdt-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdir64b'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdiri'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='pbrsb-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='prefetchiti'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='psdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tsx-ldtrk'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Haswell'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Haswell-IBRS'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Haswell-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Haswell-v3'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-noTSX'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-v3'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-v4'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-v5'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-v6'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Icelake-Server-v7'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='KnightsMill'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-4fmaps'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-4vnniw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512er'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512pf'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='KnightsMill-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-4fmaps'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-4vnniw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512er'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512pf'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Opteron_G4'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fma4'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xop'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Opteron_G4-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fma4'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xop'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Opteron_G5'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fma4'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tbm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xop'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Opteron_G5-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fma4'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tbm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xop'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='SapphireRapids'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-tile'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrc'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fzrm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tsx-ldtrk'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='SapphireRapids-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-tile'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrc'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fzrm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tsx-ldtrk'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='SapphireRapids-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-tile'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fbsdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrc'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fzrm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='psdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tsx-ldtrk'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='SapphireRapids-v3'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='amx-tile'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-bf16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-fp16'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bitalg'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vbmi2'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cldemote'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fbsdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrc'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fzrm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='la57'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdir64b'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdiri'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='psdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='taa-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='tsx-ldtrk'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='xfd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='SierraForest'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-ne-convert'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cmpccxadd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fbsdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mcdt-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='pbrsb-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='psdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='SierraForest-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-ifma'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-ne-convert'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx-vnni-int8'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='bus-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cmpccxadd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fbsdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='fsrs'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ibrs-all'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mcdt-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='pbrsb-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='psdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='serialize'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Client'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Client-IBRS'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Client-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Client-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server-IBRS'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='hle'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='rtm'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server-v3'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server-v4'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Skylake-Server-v5'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512bw'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512cd'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512dq'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512f'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='avx512vl'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Snowridge'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cldemote'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='core-capability'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdir64b'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdiri'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mpx'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='split-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Snowridge-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cldemote'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='core-capability'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdir64b'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdiri'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='mpx'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='split-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Snowridge-v2'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cldemote'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='core-capability'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdir64b'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdiri'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='split-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Snowridge-v3'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cldemote'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='core-capability'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdir64b'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdiri'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='split-lock-detect'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='Snowridge-v4'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='cldemote'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='gfni'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdir64b'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='movdiri'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='athlon'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnow'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnowext'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='athlon-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnow'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnowext'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='core2duo'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='core2duo-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='coreduo'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='coreduo-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='n270'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='n270-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='ss'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='phenom'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnow'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnowext'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <blockers model='phenom-v1'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnow'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <feature name='3dnowext'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </blockers>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </mode>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  </cpu>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <memoryBacking supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <enum name='sourceType'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <value>file</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <value>anonymous</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <value>memfd</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  </memoryBacking>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <devices>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <disk supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='diskDevice'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>disk</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>cdrom</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>floppy</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>lun</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='bus'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>ide</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>fdc</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>scsi</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>usb</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>sata</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='model'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio-transitional</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio-non-transitional</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </disk>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <graphics supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='type'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>vnc</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>egl-headless</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>dbus</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </graphics>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <video supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='modelType'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>vga</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>cirrus</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>none</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>bochs</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>ramfb</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </video>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <hostdev supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='mode'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>subsystem</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='startupPolicy'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>default</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>mandatory</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>requisite</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>optional</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='subsysType'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>usb</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>pci</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>scsi</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='capsType'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='pciBackend'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </hostdev>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <rng supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='model'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio-transitional</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtio-non-transitional</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='backendModel'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>random</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>egd</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>builtin</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </rng>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <filesystem supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='driverType'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>path</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>handle</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>virtiofs</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </filesystem>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <tpm supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='model'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>tpm-tis</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>tpm-crb</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='backendModel'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>emulator</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>external</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='backendVersion'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>2.0</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </tpm>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <redirdev supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='bus'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>usb</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </redirdev>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <channel supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='type'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>pty</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>unix</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </channel>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <crypto supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='model'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='type'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>qemu</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='backendModel'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>builtin</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </crypto>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <interface supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='backendType'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>default</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>passt</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </interface>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <panic supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='model'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>isa</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>hyperv</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </panic>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <console supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='type'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>null</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>vc</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>pty</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>dev</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>file</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>pipe</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>stdio</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>udp</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>tcp</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>unix</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>qemu-vdagent</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>dbus</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </console>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  </devices>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  <features>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <gic supported='no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <vmcoreinfo supported='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <genid supported='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <backingStoreInput supported='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <backup supported='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <async-teardown supported='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <ps2 supported='yes'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <sev supported='no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <sgx supported='no'/>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <hyperv supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='features'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>relaxed</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>vapic</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>spinlocks</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>vpindex</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>runtime</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>synic</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>stimer</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>reset</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>vendor_id</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>frequencies</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>reenlightenment</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>tlbflush</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>ipi</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>avic</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>emsr_bitmap</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>xmm_input</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <defaults>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <spinlocks>4095</spinlocks>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <stimer_direct>on</stimer_direct>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <tlbflush_direct>on</tlbflush_direct>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <tlbflush_extended>on</tlbflush_extended>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </defaults>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </hyperv>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    <launchSecurity supported='yes'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      <enum name='sectype'>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:        <value>tdx</value>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:      </enum>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:    </launchSecurity>
Dec 13 02:29:20 np0005558317 nova_compute[241222]:  </features>
Dec 13 02:29:20 np0005558317 nova_compute[241222]: </domainCapabilities>
Dec 13 02:29:20 np0005558317 nova_compute[241222]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
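The domainCapabilities XML dumped above pairs each `<model usable='no'>` entry with a `<blockers model='...'>` element listing the CPU features the host lacks. A minimal sketch of how such a report can be extracted with the standard library (the XML here is a hypothetical, shortened sample mirroring the structure logged above, not nova's actual parsing code):

```python
# Extract usable CPU models and their blocking features from a
# libvirt domainCapabilities-style XML fragment.
import xml.etree.ElementTree as ET

# Hypothetical, shortened sample of the capabilities dump above.
DOMCAPS = """
<domainCapabilities>
  <cpu>
    <mode name='custom' supported='yes'>
      <model usable='yes' vendor='Intel'>Westmere-v1</model>
      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
      <blockers model='Skylake-Server-v1'>
        <feature name='avx512f'/>
        <feature name='rtm'/>
      </blockers>
    </mode>
  </cpu>
</domainCapabilities>
"""

def cpu_model_report(xml_text):
    """Return {model_name: {'usable': bool, 'blockers': [feature, ...]}}."""
    root = ET.fromstring(xml_text)
    report = {}
    for mode in root.iter('mode'):
        # Models and their blockers are siblings, keyed by the model name.
        for model in mode.findall('model'):
            report[model.text] = {
                'usable': model.get('usable') == 'yes',
                'blockers': [],
            }
        for blk in mode.findall('blockers'):
            name = blk.get('model')
            if name in report:
                report[name]['blockers'] = [
                    f.get('name') for f in blk.findall('feature')
                ]
    return report

print(cpu_model_report(DOMCAPS))
```

Against the full dump above, the same walk would show e.g. every Skylake-Server variant blocked on the missing AVX-512 features, while the Westmere variants come back usable.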
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.312 241226 DEBUG nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.312 241226 INFO nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Secure Boot support detected#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.313 241226 INFO nova.virt.libvirt.driver [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.313 241226 INFO nova.virt.libvirt.driver [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.320 241226 DEBUG nova.virt.libvirt.driver [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.345 241226 INFO nova.virt.node [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Determined node identity 1d614cf3-e40f-4742-a628-7a61041be9be from /var/lib/nova/compute_id#033[00m
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.357 241226 WARNING nova.compute.manager [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Compute nodes ['1d614cf3-e40f-4742-a628-7a61041be9be'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.377 241226 INFO nova.compute.manager [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.397 241226 WARNING nova.compute.manager [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.397 241226 DEBUG oslo_concurrency.lockutils [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.397 241226 DEBUG oslo_concurrency.lockutils [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.397 241226 DEBUG oslo_concurrency.lockutils [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.397 241226 DEBUG nova.compute.resource_tracker [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.397 241226 DEBUG oslo_concurrency.processutils [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 02:29:20 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v577: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:20 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 02:29:20 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3001829381' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 02:29:20 np0005558317 nova_compute[241222]: 2025-12-13 07:29:20.805 241226 DEBUG oslo_concurrency.processutils [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 02:29:21 np0005558317 nova_compute[241222]: 2025-12-13 07:29:21.013 241226 WARNING nova.virt.libvirt.driver [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 02:29:21 np0005558317 nova_compute[241222]: 2025-12-13 07:29:21.014 241226 DEBUG nova.compute.resource_tracker [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5134MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": 
"0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 02:29:21 np0005558317 nova_compute[241222]: 2025-12-13 07:29:21.014 241226 DEBUG oslo_concurrency.lockutils [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 02:29:21 np0005558317 nova_compute[241222]: 2025-12-13 07:29:21.014 241226 DEBUG oslo_concurrency.lockutils [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 02:29:21 np0005558317 nova_compute[241222]: 2025-12-13 07:29:21.030 241226 WARNING nova.compute.resource_tracker [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] No compute node record for compute-0.ctlplane.example.com:1d614cf3-e40f-4742-a628-7a61041be9be: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 1d614cf3-e40f-4742-a628-7a61041be9be could not be found.
Dec 13 02:29:21 np0005558317 nova_compute[241222]: 2025-12-13 07:29:21.040 241226 INFO nova.compute.resource_tracker [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 1d614cf3-e40f-4742-a628-7a61041be9be
Dec 13 02:29:21 np0005558317 nova_compute[241222]: 2025-12-13 07:29:21.082 241226 DEBUG nova.compute.resource_tracker [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 02:29:21 np0005558317 nova_compute[241222]: 2025-12-13 07:29:21.082 241226 DEBUG nova.compute.resource_tracker [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 02:29:21 np0005558317 nova_compute[241222]: 2025-12-13 07:29:21.808 241226 INFO nova.scheduler.client.report [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] [req-24f5b894-ad56-4dff-89e5-b13eb3b342df] Created resource provider record via placement API for resource provider with UUID 1d614cf3-e40f-4742-a628-7a61041be9be and name compute-0.ctlplane.example.com.
Dec 13 02:29:22 np0005558317 nova_compute[241222]: 2025-12-13 07:29:22.146 241226 DEBUG oslo_concurrency.processutils [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 02:29:22 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v578: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 02:29:22 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/364365511' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 02:29:22 np0005558317 nova_compute[241222]: 2025-12-13 07:29:22.557 241226 DEBUG oslo_concurrency.processutils [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 02:29:22 np0005558317 nova_compute[241222]: 2025-12-13 07:29:22.561 241226 DEBUG nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec 13 02:29:22 np0005558317 nova_compute[241222]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Dec 13 02:29:22 np0005558317 nova_compute[241222]: 2025-12-13 07:29:22.561 241226 INFO nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] kernel doesn't support AMD SEV#033[00m
Dec 13 02:29:22 np0005558317 nova_compute[241222]: 2025-12-13 07:29:22.562 241226 DEBUG nova.compute.provider_tree [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Updating inventory in ProviderTree for provider 1d614cf3-e40f-4742-a628-7a61041be9be with inventory: {'MEMORY_MB': {'total': 7681, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 13 02:29:22 np0005558317 nova_compute[241222]: 2025-12-13 07:29:22.562 241226 DEBUG nova.virt.libvirt.driver [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 02:29:22 np0005558317 nova_compute[241222]: 2025-12-13 07:29:22.596 241226 DEBUG nova.scheduler.client.report [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Updated inventory for provider 1d614cf3-e40f-4742-a628-7a61041be9be with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7681, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Dec 13 02:29:22 np0005558317 nova_compute[241222]: 2025-12-13 07:29:22.596 241226 DEBUG nova.compute.provider_tree [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Updating resource provider 1d614cf3-e40f-4742-a628-7a61041be9be generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec 13 02:29:22 np0005558317 nova_compute[241222]: 2025-12-13 07:29:22.597 241226 DEBUG nova.compute.provider_tree [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Updating inventory in ProviderTree for provider 1d614cf3-e40f-4742-a628-7a61041be9be with inventory: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 13 02:29:22 np0005558317 nova_compute[241222]: 2025-12-13 07:29:22.661 241226 DEBUG nova.compute.provider_tree [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Updating resource provider 1d614cf3-e40f-4742-a628-7a61041be9be generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec 13 02:29:22 np0005558317 nova_compute[241222]: 2025-12-13 07:29:22.677 241226 DEBUG nova.compute.resource_tracker [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 02:29:22 np0005558317 nova_compute[241222]: 2025-12-13 07:29:22.677 241226 DEBUG oslo_concurrency.lockutils [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:29:22 np0005558317 nova_compute[241222]: 2025-12-13 07:29:22.677 241226 DEBUG nova.service [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Dec 13 02:29:22 np0005558317 nova_compute[241222]: 2025-12-13 07:29:22.720 241226 DEBUG nova.service [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Dec 13 02:29:22 np0005558317 nova_compute[241222]: 2025-12-13 07:29:22.720 241226 DEBUG nova.servicegroup.drivers.db [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Dec 13 02:29:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:29:24 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v579: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:24 np0005558317 podman[241549]: 2025-12-13 07:29:24.721986608 +0000 UTC m=+0.062068246 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 02:29:26 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v580: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:29:28 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v581: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:30 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v582: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:32 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v583: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:29:34 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v584: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:36 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v585: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:36 np0005558317 podman[241572]: 2025-12-13 07:29:36.695918078 +0000 UTC m=+0.037998323 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251202)
Dec 13 02:29:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 02:29:37 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2722253597' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 02:29:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 02:29:37 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2722253597' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 02:29:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 02:29:37 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2332556817' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 02:29:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 02:29:37 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2332556817' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 02:29:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 02:29:37 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3655560024' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 02:29:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 02:29:37 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3655560024' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 02:29:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:29:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:29:38
Dec 13 02:29:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:29:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 02:29:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'vms']
Dec 13 02:29:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:29:38 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v586: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:29:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:29:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:29:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:29:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:29:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:29:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:29:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:29:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:29:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:29:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:29:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:29:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:29:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:29:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:29:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:29:40 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v587: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:29:41.638 154121 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 02:29:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:29:41.638 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 02:29:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:29:41.639 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 02:29:42 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v588: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:29:44 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v589: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:45 np0005558317 podman[241589]: 2025-12-13 07:29:45.685656285 +0000 UTC m=+0.031276081 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 02:29:46 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v590: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:29:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:29:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:29:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:29:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:29:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:29:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:29:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:29:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:29:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:29:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:29:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:29:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:29:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 02:29:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:29:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:29:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:29:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:29:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:29:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:29:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:29:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:29:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:29:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:29:48 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v591: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:50 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v592: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:52 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v593: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:29:54 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v594: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:55 np0005558317 podman[241606]: 2025-12-13 07:29:55.716271323 +0000 UTC m=+0.057375149 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Dec 13 02:29:56 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v595: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:29:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:29:58 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v596: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:30:00 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v597: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:30:01 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Dec 13 02:30:01 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/344088898' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Dec 13 02:30:01 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14316 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 13 02:30:01 np0005558317 ceph-mgr[75200]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 13 02:30:01 np0005558317 ceph-mgr[75200]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 13 02:30:02 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v598: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:30:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:30:04 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v599: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:30:06 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v600: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:30:07 np0005558317 podman[241629]: 2025-12-13 07:30:07.71693384 +0000 UTC m=+0.051264779 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 02:30:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:30:08 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v601: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 10 op/s
Dec 13 02:30:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:30:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:30:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:30:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:30:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:30:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:30:09 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:30:09 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:30:09 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:30:09 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:30:09 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:30:09 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:30:09 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:30:09 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:30:09 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:30:09 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:30:09 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:30:09 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:30:09 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:30:09 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:30:09 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:30:09 np0005558317 podman[241785]: 2025-12-13 07:30:09.633159706 +0000 UTC m=+0.030059296 container create a0d4b7ca48b81fbf7912b56b53a4f8610a1ae222de60a88e12443676fe1cb8b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mcclintock, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 02:30:09 np0005558317 systemd[1]: Started libpod-conmon-a0d4b7ca48b81fbf7912b56b53a4f8610a1ae222de60a88e12443676fe1cb8b6.scope.
Dec 13 02:30:09 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:30:09 np0005558317 podman[241785]: 2025-12-13 07:30:09.692671251 +0000 UTC m=+0.089570851 container init a0d4b7ca48b81fbf7912b56b53a4f8610a1ae222de60a88e12443676fe1cb8b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mcclintock, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 02:30:09 np0005558317 podman[241785]: 2025-12-13 07:30:09.697235613 +0000 UTC m=+0.094135203 container start a0d4b7ca48b81fbf7912b56b53a4f8610a1ae222de60a88e12443676fe1cb8b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mcclintock, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 02:30:09 np0005558317 podman[241785]: 2025-12-13 07:30:09.699200398 +0000 UTC m=+0.096099988 container attach a0d4b7ca48b81fbf7912b56b53a4f8610a1ae222de60a88e12443676fe1cb8b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:30:09 np0005558317 affectionate_mcclintock[241798]: 167 167
Dec 13 02:30:09 np0005558317 systemd[1]: libpod-a0d4b7ca48b81fbf7912b56b53a4f8610a1ae222de60a88e12443676fe1cb8b6.scope: Deactivated successfully.
Dec 13 02:30:09 np0005558317 podman[241785]: 2025-12-13 07:30:09.701301559 +0000 UTC m=+0.098201149 container died a0d4b7ca48b81fbf7912b56b53a4f8610a1ae222de60a88e12443676fe1cb8b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Dec 13 02:30:09 np0005558317 systemd[1]: var-lib-containers-storage-overlay-a6ed6aa34c9f31df2d09b0bc746a9ddc5b01d7c69a416250d08baf1f3e0f03bb-merged.mount: Deactivated successfully.
Dec 13 02:30:09 np0005558317 podman[241785]: 2025-12-13 07:30:09.622745152 +0000 UTC m=+0.019644762 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:30:09 np0005558317 podman[241785]: 2025-12-13 07:30:09.721560524 +0000 UTC m=+0.118460113 container remove a0d4b7ca48b81fbf7912b56b53a4f8610a1ae222de60a88e12443676fe1cb8b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mcclintock, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 02:30:09 np0005558317 systemd[1]: libpod-conmon-a0d4b7ca48b81fbf7912b56b53a4f8610a1ae222de60a88e12443676fe1cb8b6.scope: Deactivated successfully.
Dec 13 02:30:09 np0005558317 podman[241819]: 2025-12-13 07:30:09.842784984 +0000 UTC m=+0.028489914 container create 51c6ed31266dfc2fa795a7aef0bcabac4fa16918fa6377fd9c0c19a56a580688 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_turing, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:30:09 np0005558317 systemd[1]: Started libpod-conmon-51c6ed31266dfc2fa795a7aef0bcabac4fa16918fa6377fd9c0c19a56a580688.scope.
Dec 13 02:30:09 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:30:09 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2ec719fb0fe5fda83b0d78207509324dde0821cecc9c4c29d7e601d9aa6a4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:09 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2ec719fb0fe5fda83b0d78207509324dde0821cecc9c4c29d7e601d9aa6a4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:09 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2ec719fb0fe5fda83b0d78207509324dde0821cecc9c4c29d7e601d9aa6a4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:09 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2ec719fb0fe5fda83b0d78207509324dde0821cecc9c4c29d7e601d9aa6a4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:09 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2ec719fb0fe5fda83b0d78207509324dde0821cecc9c4c29d7e601d9aa6a4d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:09 np0005558317 podman[241819]: 2025-12-13 07:30:09.905948406 +0000 UTC m=+0.091653366 container init 51c6ed31266dfc2fa795a7aef0bcabac4fa16918fa6377fd9c0c19a56a580688 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:30:09 np0005558317 podman[241819]: 2025-12-13 07:30:09.915004856 +0000 UTC m=+0.100709787 container start 51c6ed31266dfc2fa795a7aef0bcabac4fa16918fa6377fd9c0c19a56a580688 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_turing, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 02:30:09 np0005558317 podman[241819]: 2025-12-13 07:30:09.91686296 +0000 UTC m=+0.102567911 container attach 51c6ed31266dfc2fa795a7aef0bcabac4fa16918fa6377fd9c0c19a56a580688 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_turing, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:30:09 np0005558317 podman[241819]: 2025-12-13 07:30:09.83082816 +0000 UTC m=+0.016533090 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:30:10 np0005558317 angry_turing[241832]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:30:10 np0005558317 angry_turing[241832]: --> All data devices are unavailable
Dec 13 02:30:10 np0005558317 systemd[1]: libpod-51c6ed31266dfc2fa795a7aef0bcabac4fa16918fa6377fd9c0c19a56a580688.scope: Deactivated successfully.
Dec 13 02:30:10 np0005558317 podman[241819]: 2025-12-13 07:30:10.279537972 +0000 UTC m=+0.465242903 container died 51c6ed31266dfc2fa795a7aef0bcabac4fa16918fa6377fd9c0c19a56a580688 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 02:30:10 np0005558317 systemd[1]: var-lib-containers-storage-overlay-ae2ec719fb0fe5fda83b0d78207509324dde0821cecc9c4c29d7e601d9aa6a4d-merged.mount: Deactivated successfully.
Dec 13 02:30:10 np0005558317 podman[241819]: 2025-12-13 07:30:10.304271271 +0000 UTC m=+0.489976201 container remove 51c6ed31266dfc2fa795a7aef0bcabac4fa16918fa6377fd9c0c19a56a580688 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 02:30:10 np0005558317 systemd[1]: libpod-conmon-51c6ed31266dfc2fa795a7aef0bcabac4fa16918fa6377fd9c0c19a56a580688.scope: Deactivated successfully.
Dec 13 02:30:10 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v602: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 02:30:10 np0005558317 podman[241921]: 2025-12-13 07:30:10.638640498 +0000 UTC m=+0.027417547 container create 3530185b6d725fab07c95f88318315004981a8d9fa9e242f5d4230fb88b785da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_pike, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:30:10 np0005558317 systemd[1]: Started libpod-conmon-3530185b6d725fab07c95f88318315004981a8d9fa9e242f5d4230fb88b785da.scope.
Dec 13 02:30:10 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:30:10 np0005558317 podman[241921]: 2025-12-13 07:30:10.68932781 +0000 UTC m=+0.078104859 container init 3530185b6d725fab07c95f88318315004981a8d9fa9e242f5d4230fb88b785da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_pike, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 02:30:10 np0005558317 podman[241921]: 2025-12-13 07:30:10.693647804 +0000 UTC m=+0.082424853 container start 3530185b6d725fab07c95f88318315004981a8d9fa9e242f5d4230fb88b785da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 02:30:10 np0005558317 podman[241921]: 2025-12-13 07:30:10.696355415 +0000 UTC m=+0.085132484 container attach 3530185b6d725fab07c95f88318315004981a8d9fa9e242f5d4230fb88b785da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:30:10 np0005558317 relaxed_pike[241934]: 167 167
Dec 13 02:30:10 np0005558317 systemd[1]: libpod-3530185b6d725fab07c95f88318315004981a8d9fa9e242f5d4230fb88b785da.scope: Deactivated successfully.
Dec 13 02:30:10 np0005558317 podman[241921]: 2025-12-13 07:30:10.69738427 +0000 UTC m=+0.086161329 container died 3530185b6d725fab07c95f88318315004981a8d9fa9e242f5d4230fb88b785da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:30:10 np0005558317 systemd[1]: var-lib-containers-storage-overlay-3305bb84b43bc11149e1e3794236eb39f02fdc1eadad8d5e2481ce3aa03c13db-merged.mount: Deactivated successfully.
Dec 13 02:30:10 np0005558317 podman[241921]: 2025-12-13 07:30:10.713722512 +0000 UTC m=+0.102499561 container remove 3530185b6d725fab07c95f88318315004981a8d9fa9e242f5d4230fb88b785da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_pike, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 02:30:10 np0005558317 podman[241921]: 2025-12-13 07:30:10.627901394 +0000 UTC m=+0.016678464 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:30:10 np0005558317 systemd[1]: libpod-conmon-3530185b6d725fab07c95f88318315004981a8d9fa9e242f5d4230fb88b785da.scope: Deactivated successfully.
Dec 13 02:30:10 np0005558317 podman[241956]: 2025-12-13 07:30:10.833066376 +0000 UTC m=+0.029229024 container create 1530190189d19b9e9331d44dcd2939a6bbfbcc4064c6c192bc661bc66d8eaf7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 02:30:10 np0005558317 systemd[1]: Started libpod-conmon-1530190189d19b9e9331d44dcd2939a6bbfbcc4064c6c192bc661bc66d8eaf7a.scope.
Dec 13 02:30:10 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:30:10 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e29ce31f1b062a0724032a9521600a449645ee3a3f1cfb48fc11a1a6cdf0d457/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:10 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e29ce31f1b062a0724032a9521600a449645ee3a3f1cfb48fc11a1a6cdf0d457/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:10 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e29ce31f1b062a0724032a9521600a449645ee3a3f1cfb48fc11a1a6cdf0d457/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:10 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e29ce31f1b062a0724032a9521600a449645ee3a3f1cfb48fc11a1a6cdf0d457/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:10 np0005558317 podman[241956]: 2025-12-13 07:30:10.885492968 +0000 UTC m=+0.081655627 container init 1530190189d19b9e9331d44dcd2939a6bbfbcc4064c6c192bc661bc66d8eaf7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ritchie, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:30:10 np0005558317 podman[241956]: 2025-12-13 07:30:10.890879748 +0000 UTC m=+0.087042395 container start 1530190189d19b9e9331d44dcd2939a6bbfbcc4064c6c192bc661bc66d8eaf7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 02:30:10 np0005558317 podman[241956]: 2025-12-13 07:30:10.89246063 +0000 UTC m=+0.088623299 container attach 1530190189d19b9e9331d44dcd2939a6bbfbcc4064c6c192bc661bc66d8eaf7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ritchie, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:30:10 np0005558317 podman[241956]: 2025-12-13 07:30:10.820980139 +0000 UTC m=+0.017142787 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]: {
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:    "0": [
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:        {
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "devices": [
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "/dev/loop3"
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            ],
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "lv_name": "ceph_lv0",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "lv_size": "21470642176",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "name": "ceph_lv0",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "tags": {
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.cluster_name": "ceph",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.crush_device_class": "",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.encrypted": "0",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.objectstore": "bluestore",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.osd_id": "0",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.type": "block",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.vdo": "0",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.with_tpm": "0"
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            },
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "type": "block",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "vg_name": "ceph_vg0"
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:        }
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:    ],
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:    "1": [
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:        {
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "devices": [
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "/dev/loop4"
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            ],
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "lv_name": "ceph_lv1",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "lv_size": "21470642176",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "name": "ceph_lv1",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "tags": {
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.cluster_name": "ceph",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.crush_device_class": "",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.encrypted": "0",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.objectstore": "bluestore",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.osd_id": "1",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.type": "block",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.vdo": "0",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.with_tpm": "0"
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            },
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "type": "block",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "vg_name": "ceph_vg1"
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:        }
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:    ],
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:    "2": [
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:        {
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "devices": [
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "/dev/loop5"
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            ],
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "lv_name": "ceph_lv2",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "lv_size": "21470642176",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "name": "ceph_lv2",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "tags": {
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.cluster_name": "ceph",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.crush_device_class": "",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.encrypted": "0",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.objectstore": "bluestore",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.osd_id": "2",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.type": "block",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.vdo": "0",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:                "ceph.with_tpm": "0"
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            },
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "type": "block",
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:            "vg_name": "ceph_vg2"
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:        }
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]:    ]
Dec 13 02:30:11 np0005558317 inspiring_ritchie[241969]: }
Dec 13 02:30:11 np0005558317 systemd[1]: libpod-1530190189d19b9e9331d44dcd2939a6bbfbcc4064c6c192bc661bc66d8eaf7a.scope: Deactivated successfully.
Dec 13 02:30:11 np0005558317 podman[241956]: 2025-12-13 07:30:11.132341962 +0000 UTC m=+0.328504600 container died 1530190189d19b9e9331d44dcd2939a6bbfbcc4064c6c192bc661bc66d8eaf7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 02:30:11 np0005558317 systemd[1]: var-lib-containers-storage-overlay-e29ce31f1b062a0724032a9521600a449645ee3a3f1cfb48fc11a1a6cdf0d457-merged.mount: Deactivated successfully.
Dec 13 02:30:11 np0005558317 podman[241956]: 2025-12-13 07:30:11.157545535 +0000 UTC m=+0.353708184 container remove 1530190189d19b9e9331d44dcd2939a6bbfbcc4064c6c192bc661bc66d8eaf7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ritchie, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 02:30:11 np0005558317 systemd[1]: libpod-conmon-1530190189d19b9e9331d44dcd2939a6bbfbcc4064c6c192bc661bc66d8eaf7a.scope: Deactivated successfully.
Dec 13 02:30:11 np0005558317 podman[242049]: 2025-12-13 07:30:11.494229615 +0000 UTC m=+0.026963974 container create 01de4ee1c5c1106617ac36765fbf1f747b57ebddb3d7e131d4a2334dbbac96e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 02:30:11 np0005558317 systemd[1]: Started libpod-conmon-01de4ee1c5c1106617ac36765fbf1f747b57ebddb3d7e131d4a2334dbbac96e6.scope.
Dec 13 02:30:11 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:30:11 np0005558317 podman[242049]: 2025-12-13 07:30:11.551513893 +0000 UTC m=+0.084248262 container init 01de4ee1c5c1106617ac36765fbf1f747b57ebddb3d7e131d4a2334dbbac96e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:30:11 np0005558317 podman[242049]: 2025-12-13 07:30:11.557954082 +0000 UTC m=+0.090688432 container start 01de4ee1c5c1106617ac36765fbf1f747b57ebddb3d7e131d4a2334dbbac96e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 02:30:11 np0005558317 podman[242049]: 2025-12-13 07:30:11.559217217 +0000 UTC m=+0.091951567 container attach 01de4ee1c5c1106617ac36765fbf1f747b57ebddb3d7e131d4a2334dbbac96e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 02:30:11 np0005558317 sleepy_rhodes[242062]: 167 167
Dec 13 02:30:11 np0005558317 systemd[1]: libpod-01de4ee1c5c1106617ac36765fbf1f747b57ebddb3d7e131d4a2334dbbac96e6.scope: Deactivated successfully.
Dec 13 02:30:11 np0005558317 conmon[242062]: conmon 01de4ee1c5c1106617ac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-01de4ee1c5c1106617ac36765fbf1f747b57ebddb3d7e131d4a2334dbbac96e6.scope/container/memory.events
Dec 13 02:30:11 np0005558317 podman[242049]: 2025-12-13 07:30:11.562008847 +0000 UTC m=+0.094743196 container died 01de4ee1c5c1106617ac36765fbf1f747b57ebddb3d7e131d4a2334dbbac96e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:30:11 np0005558317 podman[242049]: 2025-12-13 07:30:11.578715262 +0000 UTC m=+0.111449611 container remove 01de4ee1c5c1106617ac36765fbf1f747b57ebddb3d7e131d4a2334dbbac96e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:30:11 np0005558317 podman[242049]: 2025-12-13 07:30:11.483577284 +0000 UTC m=+0.016311654 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:30:11 np0005558317 systemd[1]: libpod-conmon-01de4ee1c5c1106617ac36765fbf1f747b57ebddb3d7e131d4a2334dbbac96e6.scope: Deactivated successfully.
Dec 13 02:30:11 np0005558317 systemd[1]: var-lib-containers-storage-overlay-0d982fdeea4753387e880d4e9518c86bf4d2a37bccf3b49bd78866ce05646d82-merged.mount: Deactivated successfully.
Dec 13 02:30:11 np0005558317 podman[242084]: 2025-12-13 07:30:11.701989236 +0000 UTC m=+0.028741978 container create a815f79513e526e099b42018146626b811725e5a9bd97c987281a6a873ce704d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_brattain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 02:30:11 np0005558317 systemd[1]: Started libpod-conmon-a815f79513e526e099b42018146626b811725e5a9bd97c987281a6a873ce704d.scope.
Dec 13 02:30:11 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:30:11 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a85522dfc814a6a0d218e3901948a6458026550e91b6a01ee720c53e4b3ef6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:11 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a85522dfc814a6a0d218e3901948a6458026550e91b6a01ee720c53e4b3ef6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:11 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a85522dfc814a6a0d218e3901948a6458026550e91b6a01ee720c53e4b3ef6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:11 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a85522dfc814a6a0d218e3901948a6458026550e91b6a01ee720c53e4b3ef6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:11 np0005558317 podman[242084]: 2025-12-13 07:30:11.755597991 +0000 UTC m=+0.082350724 container init a815f79513e526e099b42018146626b811725e5a9bd97c987281a6a873ce704d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 02:30:11 np0005558317 podman[242084]: 2025-12-13 07:30:11.760636054 +0000 UTC m=+0.087388787 container start a815f79513e526e099b42018146626b811725e5a9bd97c987281a6a873ce704d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:30:11 np0005558317 podman[242084]: 2025-12-13 07:30:11.762022582 +0000 UTC m=+0.088775324 container attach a815f79513e526e099b42018146626b811725e5a9bd97c987281a6a873ce704d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 02:30:11 np0005558317 podman[242084]: 2025-12-13 07:30:11.690363905 +0000 UTC m=+0.017116657 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:30:12 np0005558317 lvm[242175]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:30:12 np0005558317 lvm[242174]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:30:12 np0005558317 lvm[242174]: VG ceph_vg0 finished
Dec 13 02:30:12 np0005558317 lvm[242175]: VG ceph_vg1 finished
Dec 13 02:30:12 np0005558317 lvm[242178]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:30:12 np0005558317 lvm[242178]: VG ceph_vg2 finished
Dec 13 02:30:12 np0005558317 recursing_brattain[242097]: {}
Dec 13 02:30:12 np0005558317 systemd[1]: libpod-a815f79513e526e099b42018146626b811725e5a9bd97c987281a6a873ce704d.scope: Deactivated successfully.
Dec 13 02:30:12 np0005558317 podman[242084]: 2025-12-13 07:30:12.374469406 +0000 UTC m=+0.701222168 container died a815f79513e526e099b42018146626b811725e5a9bd97c987281a6a873ce704d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_brattain, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 02:30:12 np0005558317 systemd[1]: var-lib-containers-storage-overlay-2a85522dfc814a6a0d218e3901948a6458026550e91b6a01ee720c53e4b3ef6e-merged.mount: Deactivated successfully.
Dec 13 02:30:12 np0005558317 podman[242084]: 2025-12-13 07:30:12.399715017 +0000 UTC m=+0.726467749 container remove a815f79513e526e099b42018146626b811725e5a9bd97c987281a6a873ce704d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_brattain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 02:30:12 np0005558317 systemd[1]: libpod-conmon-a815f79513e526e099b42018146626b811725e5a9bd97c987281a6a873ce704d.scope: Deactivated successfully.
Dec 13 02:30:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:30:12 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:30:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:30:12 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:30:12 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v603: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 02:30:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:30:13 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:30:13 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:30:14 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v604: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 02:30:16 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v605: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 02:30:16 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Dec 13 02:30:16 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4262892468' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Dec 13 02:30:16 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14334 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 13 02:30:16 np0005558317 ceph-mgr[75200]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 13 02:30:16 np0005558317 ceph-mgr[75200]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 13 02:30:16 np0005558317 podman[242214]: 2025-12-13 07:30:16.698313016 +0000 UTC m=+0.040935736 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 13 02:30:17 np0005558317 nova_compute[241222]: 2025-12-13 07:30:17.721 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:30:17 np0005558317 nova_compute[241222]: 2025-12-13 07:30:17.737 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:30:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:30:18 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v606: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 02:30:19 np0005558317 nova_compute[241222]: 2025-12-13 07:30:19.569 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:30:19 np0005558317 nova_compute[241222]: 2025-12-13 07:30:19.570 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:30:19 np0005558317 nova_compute[241222]: 2025-12-13 07:30:19.570 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 02:30:19 np0005558317 nova_compute[241222]: 2025-12-13 07:30:19.570 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 02:30:19 np0005558317 nova_compute[241222]: 2025-12-13 07:30:19.579 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 02:30:19 np0005558317 nova_compute[241222]: 2025-12-13 07:30:19.579 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:30:19 np0005558317 nova_compute[241222]: 2025-12-13 07:30:19.579 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:30:19 np0005558317 nova_compute[241222]: 2025-12-13 07:30:19.580 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:30:19 np0005558317 nova_compute[241222]: 2025-12-13 07:30:19.580 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:30:19 np0005558317 nova_compute[241222]: 2025-12-13 07:30:19.580 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:30:19 np0005558317 nova_compute[241222]: 2025-12-13 07:30:19.580 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:30:19 np0005558317 nova_compute[241222]: 2025-12-13 07:30:19.580 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 02:30:19 np0005558317 nova_compute[241222]: 2025-12-13 07:30:19.581 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:30:19 np0005558317 nova_compute[241222]: 2025-12-13 07:30:19.596 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:30:19 np0005558317 nova_compute[241222]: 2025-12-13 07:30:19.597 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:30:19 np0005558317 nova_compute[241222]: 2025-12-13 07:30:19.597 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:30:19 np0005558317 nova_compute[241222]: 2025-12-13 07:30:19.597 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 02:30:19 np0005558317 nova_compute[241222]: 2025-12-13 07:30:19.597 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 02:30:19 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 02:30:19 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/802453350' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 02:30:19 np0005558317 nova_compute[241222]: 2025-12-13 07:30:19.996 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 02:30:20 np0005558317 nova_compute[241222]: 2025-12-13 07:30:20.204 241226 WARNING nova.virt.libvirt.driver [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 02:30:20 np0005558317 nova_compute[241222]: 2025-12-13 07:30:20.205 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5170MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": 
"0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 02:30:20 np0005558317 nova_compute[241222]: 2025-12-13 07:30:20.206 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:30:20 np0005558317 nova_compute[241222]: 2025-12-13 07:30:20.206 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:30:20 np0005558317 nova_compute[241222]: 2025-12-13 07:30:20.261 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 02:30:20 np0005558317 nova_compute[241222]: 2025-12-13 07:30:20.261 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 02:30:20 np0005558317 nova_compute[241222]: 2025-12-13 07:30:20.273 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 02:30:20 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v607: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 5 op/s
Dec 13 02:30:20 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 02:30:20 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2029954754' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 02:30:20 np0005558317 nova_compute[241222]: 2025-12-13 07:30:20.683 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 02:30:20 np0005558317 nova_compute[241222]: 2025-12-13 07:30:20.687 241226 DEBUG nova.compute.provider_tree [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Inventory has not changed in ProviderTree for provider: 1d614cf3-e40f-4742-a628-7a61041be9be update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 02:30:20 np0005558317 nova_compute[241222]: 2025-12-13 07:30:20.710 241226 DEBUG nova.scheduler.client.report [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Inventory has not changed for provider 1d614cf3-e40f-4742-a628-7a61041be9be based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 02:30:20 np0005558317 nova_compute[241222]: 2025-12-13 07:30:20.711 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 02:30:20 np0005558317 nova_compute[241222]: 2025-12-13 07:30:20.711 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:30:22 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v608: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:30:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:30:24 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v609: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:30:26 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v610: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:30:26 np0005558317 podman[242274]: 2025-12-13 07:30:26.714991848 +0000 UTC m=+0.057272566 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 02:30:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:30:28 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v611: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:30:30 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v612: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:30:32 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v613: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:30:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:30:34 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v614: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:30:36 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v615: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:30:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:30:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:30:38
Dec 13 02:30:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:30:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 02:30:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'vms', 'backups', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root']
Dec 13 02:30:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:30:38 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v616: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:30:38 np0005558317 podman[242297]: 2025-12-13 07:30:38.69816239 +0000 UTC m=+0.040051613 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 02:30:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:30:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:30:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:30:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:30:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:30:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:30:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:30:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:30:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:30:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:30:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:30:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:30:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:30:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:30:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:30:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:30:40 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v617: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:30:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:30:41.639 154121 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:30:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:30:41.639 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:30:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:30:41.639 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:30:42 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v618: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:30:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:30:44 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v619: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:30:46 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v620: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:30:47 np0005558317 podman[242314]: 2025-12-13 07:30:47.69297754 +0000 UTC m=+0.036013871 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.803558) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611047803623, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1705, "num_deletes": 250, "total_data_size": 2818792, "memory_usage": 2855192, "flush_reason": "Manual Compaction"}
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611047808082, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1607824, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11728, "largest_seqno": 13432, "table_properties": {"data_size": 1602211, "index_size": 2753, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14431, "raw_average_key_size": 20, "raw_value_size": 1589695, "raw_average_value_size": 2226, "num_data_blocks": 127, "num_entries": 714, "num_filter_entries": 714, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610861, "oldest_key_time": 1765610861, "file_creation_time": 1765611047, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 4538 microseconds, and 3313 cpu microseconds.
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.808104) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1607824 bytes OK
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.808117) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.808630) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.808642) EVENT_LOG_v1 {"time_micros": 1765611047808639, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.808651) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2811479, prev total WAL file size 2811479, number of live WAL files 2.
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.809225) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1570KB)], [29(8027KB)]
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611047809257, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9828122, "oldest_snapshot_seqno": -1}
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 3980 keys, 7643997 bytes, temperature: kUnknown
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611047824570, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7643997, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7615665, "index_size": 17278, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 94801, "raw_average_key_size": 23, "raw_value_size": 7542171, "raw_average_value_size": 1895, "num_data_blocks": 754, "num_entries": 3980, "num_filter_entries": 3980, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610001, "oldest_key_time": 0, "file_creation_time": 1765611047, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.824697) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7643997 bytes
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.825081) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 640.4 rd, 498.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.8 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(10.9) write-amplify(4.8) OK, records in: 4405, records dropped: 425 output_compression: NoCompression
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.825096) EVENT_LOG_v1 {"time_micros": 1765611047825088, "job": 12, "event": "compaction_finished", "compaction_time_micros": 15348, "compaction_time_cpu_micros": 12689, "output_level": 6, "num_output_files": 1, "total_output_size": 7643997, "num_input_records": 4405, "num_output_records": 3980, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611047825322, "job": 12, "event": "table_file_deletion", "file_number": 31}
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611047826320, "job": 12, "event": "table_file_deletion", "file_number": 29}
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.809164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.826337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.826339) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.826340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.826341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:30:47 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.826342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:30:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:30:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:30:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:30:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:30:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:30:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:30:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:30:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:30:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:30:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:30:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:30:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:30:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 02:30:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:30:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:30:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:30:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:30:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:30:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:30:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:30:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:30:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:30:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:30:48 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v621: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:30:50 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v622: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:30:52 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v623: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:30:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:30:54 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v624: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:30:56 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v625: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:30:57 np0005558317 podman[242330]: 2025-12-13 07:30:57.716187818 +0000 UTC m=+0.058325525 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec 13 02:30:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:30:58 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v626: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:00 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v627: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:02 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v628: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:31:04 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v629: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:06 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v630: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:31:08 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v631: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:31:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:31:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:31:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:31:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:31:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:31:09 np0005558317 podman[242353]: 2025-12-13 07:31:09.697209383 +0000 UTC m=+0.036939963 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:31:10 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v632: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:12 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v633: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:31:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:31:12 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:31:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:31:12 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:31:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:31:12 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:31:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:31:12 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:31:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:31:12 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:31:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:31:12 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:31:13 np0005558317 podman[242509]: 2025-12-13 07:31:13.287043225 +0000 UTC m=+0.026529087 container create 76899710537dcff46b45a231e73be8c9953e8e3a73684d64542bdb99ee08fee1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 02:31:13 np0005558317 systemd[1]: Started libpod-conmon-76899710537dcff46b45a231e73be8c9953e8e3a73684d64542bdb99ee08fee1.scope.
Dec 13 02:31:13 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:31:13 np0005558317 podman[242509]: 2025-12-13 07:31:13.341339955 +0000 UTC m=+0.080825816 container init 76899710537dcff46b45a231e73be8c9953e8e3a73684d64542bdb99ee08fee1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Dec 13 02:31:13 np0005558317 podman[242509]: 2025-12-13 07:31:13.345859524 +0000 UTC m=+0.085345385 container start 76899710537dcff46b45a231e73be8c9953e8e3a73684d64542bdb99ee08fee1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:31:13 np0005558317 podman[242509]: 2025-12-13 07:31:13.347022189 +0000 UTC m=+0.086508051 container attach 76899710537dcff46b45a231e73be8c9953e8e3a73684d64542bdb99ee08fee1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_beaver, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:31:13 np0005558317 optimistic_beaver[242523]: 167 167
Dec 13 02:31:13 np0005558317 systemd[1]: libpod-76899710537dcff46b45a231e73be8c9953e8e3a73684d64542bdb99ee08fee1.scope: Deactivated successfully.
Dec 13 02:31:13 np0005558317 conmon[242523]: conmon 76899710537dcff46b45 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-76899710537dcff46b45a231e73be8c9953e8e3a73684d64542bdb99ee08fee1.scope/container/memory.events
Dec 13 02:31:13 np0005558317 podman[242509]: 2025-12-13 07:31:13.350562718 +0000 UTC m=+0.090048578 container died 76899710537dcff46b45a231e73be8c9953e8e3a73684d64542bdb99ee08fee1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_beaver, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:31:13 np0005558317 systemd[1]: var-lib-containers-storage-overlay-a7d87d158f9c036a2747833ad73b7a2fd3fcfd62f622219b960daf4752e1569f-merged.mount: Deactivated successfully.
Dec 13 02:31:13 np0005558317 podman[242509]: 2025-12-13 07:31:13.372004897 +0000 UTC m=+0.111490759 container remove 76899710537dcff46b45a231e73be8c9953e8e3a73684d64542bdb99ee08fee1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 02:31:13 np0005558317 podman[242509]: 2025-12-13 07:31:13.27574985 +0000 UTC m=+0.015235731 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:31:13 np0005558317 systemd[1]: libpod-conmon-76899710537dcff46b45a231e73be8c9953e8e3a73684d64542bdb99ee08fee1.scope: Deactivated successfully.
Dec 13 02:31:13 np0005558317 podman[242544]: 2025-12-13 07:31:13.491989456 +0000 UTC m=+0.028416164 container create 3d84250508cfe564086d9f7366f63c72cc08034496aff18b12a3b304f362809d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:31:13 np0005558317 systemd[1]: Started libpod-conmon-3d84250508cfe564086d9f7366f63c72cc08034496aff18b12a3b304f362809d.scope.
Dec 13 02:31:13 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:31:13 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b29b3ee99d82ea9ebfeaa492d01661f1b110615d1d575750a5379ae3642a0f95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:13 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b29b3ee99d82ea9ebfeaa492d01661f1b110615d1d575750a5379ae3642a0f95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:13 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b29b3ee99d82ea9ebfeaa492d01661f1b110615d1d575750a5379ae3642a0f95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:13 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b29b3ee99d82ea9ebfeaa492d01661f1b110615d1d575750a5379ae3642a0f95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:13 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b29b3ee99d82ea9ebfeaa492d01661f1b110615d1d575750a5379ae3642a0f95/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:13 np0005558317 podman[242544]: 2025-12-13 07:31:13.566097669 +0000 UTC m=+0.102524378 container init 3d84250508cfe564086d9f7366f63c72cc08034496aff18b12a3b304f362809d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 02:31:13 np0005558317 podman[242544]: 2025-12-13 07:31:13.570800221 +0000 UTC m=+0.107226931 container start 3d84250508cfe564086d9f7366f63c72cc08034496aff18b12a3b304f362809d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:31:13 np0005558317 podman[242544]: 2025-12-13 07:31:13.57193222 +0000 UTC m=+0.108358928 container attach 3d84250508cfe564086d9f7366f63c72cc08034496aff18b12a3b304f362809d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:31:13 np0005558317 podman[242544]: 2025-12-13 07:31:13.480937794 +0000 UTC m=+0.017364514 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:31:13 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:31:13 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:31:13 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:31:13 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 02:31:13 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4261629339' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 02:31:13 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 02:31:13 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4261629339' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 02:31:13 np0005558317 hungry_bhaskara[242557]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:31:13 np0005558317 hungry_bhaskara[242557]: --> All data devices are unavailable
Dec 13 02:31:13 np0005558317 systemd[1]: libpod-3d84250508cfe564086d9f7366f63c72cc08034496aff18b12a3b304f362809d.scope: Deactivated successfully.
Dec 13 02:31:13 np0005558317 podman[242544]: 2025-12-13 07:31:13.958350428 +0000 UTC m=+0.494777147 container died 3d84250508cfe564086d9f7366f63c72cc08034496aff18b12a3b304f362809d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 02:31:13 np0005558317 systemd[1]: var-lib-containers-storage-overlay-b29b3ee99d82ea9ebfeaa492d01661f1b110615d1d575750a5379ae3642a0f95-merged.mount: Deactivated successfully.
Dec 13 02:31:13 np0005558317 podman[242544]: 2025-12-13 07:31:13.982358642 +0000 UTC m=+0.518785352 container remove 3d84250508cfe564086d9f7366f63c72cc08034496aff18b12a3b304f362809d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bhaskara, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 02:31:13 np0005558317 systemd[1]: libpod-conmon-3d84250508cfe564086d9f7366f63c72cc08034496aff18b12a3b304f362809d.scope: Deactivated successfully.
Dec 13 02:31:14 np0005558317 podman[242648]: 2025-12-13 07:31:14.307247081 +0000 UTC m=+0.027526431 container create eda81074bb97e25cae4d9b9566d4679a1578db171aa0135296c372e4e506fa6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_gould, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 02:31:14 np0005558317 systemd[1]: Started libpod-conmon-eda81074bb97e25cae4d9b9566d4679a1578db171aa0135296c372e4e506fa6b.scope.
Dec 13 02:31:14 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:31:14 np0005558317 podman[242648]: 2025-12-13 07:31:14.363087853 +0000 UTC m=+0.083367203 container init eda81074bb97e25cae4d9b9566d4679a1578db171aa0135296c372e4e506fa6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 02:31:14 np0005558317 podman[242648]: 2025-12-13 07:31:14.368399542 +0000 UTC m=+0.088678882 container start eda81074bb97e25cae4d9b9566d4679a1578db171aa0135296c372e4e506fa6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_gould, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 02:31:14 np0005558317 podman[242648]: 2025-12-13 07:31:14.369715256 +0000 UTC m=+0.089994606 container attach eda81074bb97e25cae4d9b9566d4679a1578db171aa0135296c372e4e506fa6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 02:31:14 np0005558317 modest_gould[242661]: 167 167
Dec 13 02:31:14 np0005558317 systemd[1]: libpod-eda81074bb97e25cae4d9b9566d4679a1578db171aa0135296c372e4e506fa6b.scope: Deactivated successfully.
Dec 13 02:31:14 np0005558317 podman[242648]: 2025-12-13 07:31:14.373111702 +0000 UTC m=+0.093391042 container died eda81074bb97e25cae4d9b9566d4679a1578db171aa0135296c372e4e506fa6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_gould, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 02:31:14 np0005558317 systemd[1]: var-lib-containers-storage-overlay-fa456750402eea653060f990172567f32da9c8388ad7c9758d86c2bb63b61fb7-merged.mount: Deactivated successfully.
Dec 13 02:31:14 np0005558317 podman[242648]: 2025-12-13 07:31:14.39012779 +0000 UTC m=+0.110407120 container remove eda81074bb97e25cae4d9b9566d4679a1578db171aa0135296c372e4e506fa6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 02:31:14 np0005558317 podman[242648]: 2025-12-13 07:31:14.296273266 +0000 UTC m=+0.016552626 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:31:14 np0005558317 systemd[1]: libpod-conmon-eda81074bb97e25cae4d9b9566d4679a1578db171aa0135296c372e4e506fa6b.scope: Deactivated successfully.
Dec 13 02:31:14 np0005558317 podman[242683]: 2025-12-13 07:31:14.51254917 +0000 UTC m=+0.028155565 container create c7d4d16e13cd8220931c8f06999a60bbf0d6bc90a70247a23523ed924dc5c402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Dec 13 02:31:14 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v634: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:14 np0005558317 systemd[1]: Started libpod-conmon-c7d4d16e13cd8220931c8f06999a60bbf0d6bc90a70247a23523ed924dc5c402.scope.
Dec 13 02:31:14 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:31:14 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9de35614a34bbc5461f56d2ed10d3bdf2582f6d2b21e2b62508dae1aaf12a04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:14 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9de35614a34bbc5461f56d2ed10d3bdf2582f6d2b21e2b62508dae1aaf12a04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:14 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9de35614a34bbc5461f56d2ed10d3bdf2582f6d2b21e2b62508dae1aaf12a04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:14 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9de35614a34bbc5461f56d2ed10d3bdf2582f6d2b21e2b62508dae1aaf12a04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:14 np0005558317 podman[242683]: 2025-12-13 07:31:14.561858421 +0000 UTC m=+0.077464816 container init c7d4d16e13cd8220931c8f06999a60bbf0d6bc90a70247a23523ed924dc5c402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mcnulty, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 02:31:14 np0005558317 podman[242683]: 2025-12-13 07:31:14.567613322 +0000 UTC m=+0.083219707 container start c7d4d16e13cd8220931c8f06999a60bbf0d6bc90a70247a23523ed924dc5c402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mcnulty, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:31:14 np0005558317 podman[242683]: 2025-12-13 07:31:14.569003677 +0000 UTC m=+0.084610072 container attach c7d4d16e13cd8220931c8f06999a60bbf0d6bc90a70247a23523ed924dc5c402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mcnulty, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 02:31:14 np0005558317 podman[242683]: 2025-12-13 07:31:14.501520933 +0000 UTC m=+0.017127328 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]: {
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:    "0": [
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:        {
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "devices": [
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "/dev/loop3"
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            ],
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "lv_name": "ceph_lv0",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "lv_size": "21470642176",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "name": "ceph_lv0",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "tags": {
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.cluster_name": "ceph",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.crush_device_class": "",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.encrypted": "0",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.objectstore": "bluestore",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.osd_id": "0",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.type": "block",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.vdo": "0",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.with_tpm": "0"
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            },
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "type": "block",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "vg_name": "ceph_vg0"
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:        }
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:    ],
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:    "1": [
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:        {
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "devices": [
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "/dev/loop4"
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            ],
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "lv_name": "ceph_lv1",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "lv_size": "21470642176",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "name": "ceph_lv1",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "tags": {
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.cluster_name": "ceph",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.crush_device_class": "",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.encrypted": "0",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.objectstore": "bluestore",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.osd_id": "1",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.type": "block",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.vdo": "0",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.with_tpm": "0"
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            },
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "type": "block",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "vg_name": "ceph_vg1"
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:        }
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:    ],
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:    "2": [
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:        {
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "devices": [
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "/dev/loop5"
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            ],
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "lv_name": "ceph_lv2",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "lv_size": "21470642176",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "name": "ceph_lv2",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "tags": {
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.cluster_name": "ceph",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.crush_device_class": "",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.encrypted": "0",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.objectstore": "bluestore",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.osd_id": "2",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.type": "block",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.vdo": "0",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:                "ceph.with_tpm": "0"
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            },
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "type": "block",
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:            "vg_name": "ceph_vg2"
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:        }
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]:    ]
Dec 13 02:31:14 np0005558317 tender_mcnulty[242696]: }
Dec 13 02:31:14 np0005558317 systemd[1]: libpod-c7d4d16e13cd8220931c8f06999a60bbf0d6bc90a70247a23523ed924dc5c402.scope: Deactivated successfully.
Dec 13 02:31:14 np0005558317 podman[242683]: 2025-12-13 07:31:14.808934341 +0000 UTC m=+0.324540736 container died c7d4d16e13cd8220931c8f06999a60bbf0d6bc90a70247a23523ed924dc5c402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mcnulty, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 02:31:14 np0005558317 systemd[1]: var-lib-containers-storage-overlay-e9de35614a34bbc5461f56d2ed10d3bdf2582f6d2b21e2b62508dae1aaf12a04-merged.mount: Deactivated successfully.
Dec 13 02:31:14 np0005558317 podman[242683]: 2025-12-13 07:31:14.838099764 +0000 UTC m=+0.353706159 container remove c7d4d16e13cd8220931c8f06999a60bbf0d6bc90a70247a23523ed924dc5c402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:31:14 np0005558317 systemd[1]: libpod-conmon-c7d4d16e13cd8220931c8f06999a60bbf0d6bc90a70247a23523ed924dc5c402.scope: Deactivated successfully.
Dec 13 02:31:15 np0005558317 podman[242775]: 2025-12-13 07:31:15.162701342 +0000 UTC m=+0.025743770 container create 4376159b9f85af721ef8ca163d004656386b89fa1ebe19a833588b5dfd72e86f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ptolemy, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Dec 13 02:31:15 np0005558317 systemd[1]: Started libpod-conmon-4376159b9f85af721ef8ca163d004656386b89fa1ebe19a833588b5dfd72e86f.scope.
Dec 13 02:31:15 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:31:15 np0005558317 podman[242775]: 2025-12-13 07:31:15.214913763 +0000 UTC m=+0.077956200 container init 4376159b9f85af721ef8ca163d004656386b89fa1ebe19a833588b5dfd72e86f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 02:31:15 np0005558317 podman[242775]: 2025-12-13 07:31:15.220025846 +0000 UTC m=+0.083068272 container start 4376159b9f85af721ef8ca163d004656386b89fa1ebe19a833588b5dfd72e86f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:31:15 np0005558317 podman[242775]: 2025-12-13 07:31:15.221194533 +0000 UTC m=+0.084236961 container attach 4376159b9f85af721ef8ca163d004656386b89fa1ebe19a833588b5dfd72e86f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:31:15 np0005558317 magical_ptolemy[242788]: 167 167
Dec 13 02:31:15 np0005558317 systemd[1]: libpod-4376159b9f85af721ef8ca163d004656386b89fa1ebe19a833588b5dfd72e86f.scope: Deactivated successfully.
Dec 13 02:31:15 np0005558317 conmon[242788]: conmon 4376159b9f85af721ef8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4376159b9f85af721ef8ca163d004656386b89fa1ebe19a833588b5dfd72e86f.scope/container/memory.events
Dec 13 02:31:15 np0005558317 podman[242775]: 2025-12-13 07:31:15.224362049 +0000 UTC m=+0.087404496 container died 4376159b9f85af721ef8ca163d004656386b89fa1ebe19a833588b5dfd72e86f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:31:15 np0005558317 podman[242775]: 2025-12-13 07:31:15.240205331 +0000 UTC m=+0.103247758 container remove 4376159b9f85af721ef8ca163d004656386b89fa1ebe19a833588b5dfd72e86f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 02:31:15 np0005558317 podman[242775]: 2025-12-13 07:31:15.152754199 +0000 UTC m=+0.015796645 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:31:15 np0005558317 systemd[1]: libpod-conmon-4376159b9f85af721ef8ca163d004656386b89fa1ebe19a833588b5dfd72e86f.scope: Deactivated successfully.
Dec 13 02:31:15 np0005558317 systemd[1]: var-lib-containers-storage-overlay-98c6b0153a96164287a981d159aa3c0f83436e9760a7c29f14ca13c0bd117a09-merged.mount: Deactivated successfully.
Dec 13 02:31:15 np0005558317 podman[242810]: 2025-12-13 07:31:15.359962521 +0000 UTC m=+0.026684719 container create a9356c14b3db2c7d7c65ec8b74580e4237dcfacfe36287f5ab32ed2017fcab8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_khorana, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:31:15 np0005558317 systemd[1]: Started libpod-conmon-a9356c14b3db2c7d7c65ec8b74580e4237dcfacfe36287f5ab32ed2017fcab8c.scope.
Dec 13 02:31:15 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:31:15 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f653fe7a1d76048796942833fa5d9bee5f1e40c34907644f1091639ba40c18e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:15 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f653fe7a1d76048796942833fa5d9bee5f1e40c34907644f1091639ba40c18e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:15 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f653fe7a1d76048796942833fa5d9bee5f1e40c34907644f1091639ba40c18e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:15 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f653fe7a1d76048796942833fa5d9bee5f1e40c34907644f1091639ba40c18e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:15 np0005558317 podman[242810]: 2025-12-13 07:31:15.424848513 +0000 UTC m=+0.091570721 container init a9356c14b3db2c7d7c65ec8b74580e4237dcfacfe36287f5ab32ed2017fcab8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 02:31:15 np0005558317 podman[242810]: 2025-12-13 07:31:15.429567717 +0000 UTC m=+0.096289915 container start a9356c14b3db2c7d7c65ec8b74580e4237dcfacfe36287f5ab32ed2017fcab8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_khorana, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:31:15 np0005558317 podman[242810]: 2025-12-13 07:31:15.430798221 +0000 UTC m=+0.097520439 container attach a9356c14b3db2c7d7c65ec8b74580e4237dcfacfe36287f5ab32ed2017fcab8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:31:15 np0005558317 podman[242810]: 2025-12-13 07:31:15.349756912 +0000 UTC m=+0.016479129 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:31:15 np0005558317 lvm[242901]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:31:15 np0005558317 lvm[242902]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:31:15 np0005558317 lvm[242901]: VG ceph_vg0 finished
Dec 13 02:31:15 np0005558317 lvm[242902]: VG ceph_vg1 finished
Dec 13 02:31:15 np0005558317 lvm[242905]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:31:15 np0005558317 lvm[242905]: VG ceph_vg2 finished
Dec 13 02:31:16 np0005558317 hopeful_khorana[242824]: {}
Dec 13 02:31:16 np0005558317 systemd[1]: libpod-a9356c14b3db2c7d7c65ec8b74580e4237dcfacfe36287f5ab32ed2017fcab8c.scope: Deactivated successfully.
Dec 13 02:31:16 np0005558317 podman[242810]: 2025-12-13 07:31:16.040784846 +0000 UTC m=+0.707507044 container died a9356c14b3db2c7d7c65ec8b74580e4237dcfacfe36287f5ab32ed2017fcab8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 02:31:16 np0005558317 systemd[1]: var-lib-containers-storage-overlay-2f653fe7a1d76048796942833fa5d9bee5f1e40c34907644f1091639ba40c18e-merged.mount: Deactivated successfully.
Dec 13 02:31:16 np0005558317 podman[242810]: 2025-12-13 07:31:16.067309403 +0000 UTC m=+0.734031601 container remove a9356c14b3db2c7d7c65ec8b74580e4237dcfacfe36287f5ab32ed2017fcab8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_khorana, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:31:16 np0005558317 systemd[1]: libpod-conmon-a9356c14b3db2c7d7c65ec8b74580e4237dcfacfe36287f5ab32ed2017fcab8c.scope: Deactivated successfully.
Dec 13 02:31:16 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:31:16 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:31:16 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:31:16 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:31:16 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v635: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:17 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:31:17 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:31:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:31:18 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v636: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:18 np0005558317 podman[242942]: 2025-12-13 07:31:18.696982275 +0000 UTC m=+0.040520374 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 02:31:20 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v637: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:20 np0005558317 nova_compute[241222]: 2025-12-13 07:31:20.705 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:31:20 np0005558317 nova_compute[241222]: 2025-12-13 07:31:20.723 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:31:20 np0005558317 nova_compute[241222]: 2025-12-13 07:31:20.723 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 02:31:20 np0005558317 nova_compute[241222]: 2025-12-13 07:31:20.724 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 02:31:20 np0005558317 nova_compute[241222]: 2025-12-13 07:31:20.732 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 02:31:20 np0005558317 nova_compute[241222]: 2025-12-13 07:31:20.732 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:31:20 np0005558317 nova_compute[241222]: 2025-12-13 07:31:20.732 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 02:31:20 np0005558317 nova_compute[241222]: 2025-12-13 07:31:20.732 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:31:20 np0005558317 nova_compute[241222]: 2025-12-13 07:31:20.747 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:31:20 np0005558317 nova_compute[241222]: 2025-12-13 07:31:20.747 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:31:20 np0005558317 nova_compute[241222]: 2025-12-13 07:31:20.747 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:31:20 np0005558317 nova_compute[241222]: 2025-12-13 07:31:20.747 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 02:31:20 np0005558317 nova_compute[241222]: 2025-12-13 07:31:20.748 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 02:31:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 02:31:21 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3426600084' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 02:31:21 np0005558317 nova_compute[241222]: 2025-12-13 07:31:21.154 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 02:31:21 np0005558317 nova_compute[241222]: 2025-12-13 07:31:21.351 241226 WARNING nova.virt.libvirt.driver [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 02:31:21 np0005558317 nova_compute[241222]: 2025-12-13 07:31:21.352 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5153MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": 
"0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 02:31:21 np0005558317 nova_compute[241222]: 2025-12-13 07:31:21.352 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 02:31:21 np0005558317 nova_compute[241222]: 2025-12-13 07:31:21.352 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 02:31:21 np0005558317 nova_compute[241222]: 2025-12-13 07:31:21.422 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 02:31:21 np0005558317 nova_compute[241222]: 2025-12-13 07:31:21.422 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 02:31:21 np0005558317 nova_compute[241222]: 2025-12-13 07:31:21.433 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 02:31:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 02:31:21 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/257441879' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 02:31:21 np0005558317 nova_compute[241222]: 2025-12-13 07:31:21.841 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 02:31:21 np0005558317 nova_compute[241222]: 2025-12-13 07:31:21.844 241226 DEBUG nova.compute.provider_tree [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Inventory has not changed in ProviderTree for provider: 1d614cf3-e40f-4742-a628-7a61041be9be update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 02:31:21 np0005558317 nova_compute[241222]: 2025-12-13 07:31:21.855 241226 DEBUG nova.scheduler.client.report [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Inventory has not changed for provider 1d614cf3-e40f-4742-a628-7a61041be9be based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 02:31:21 np0005558317 nova_compute[241222]: 2025-12-13 07:31:21.856 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 02:31:21 np0005558317 nova_compute[241222]: 2025-12-13 07:31:21.857 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.504s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 02:31:22 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v638: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:22 np0005558317 nova_compute[241222]: 2025-12-13 07:31:22.692 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 02:31:22 np0005558317 nova_compute[241222]: 2025-12-13 07:31:22.692 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 02:31:22 np0005558317 nova_compute[241222]: 2025-12-13 07:31:22.692 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 02:31:22 np0005558317 nova_compute[241222]: 2025-12-13 07:31:22.693 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 02:31:22 np0005558317 nova_compute[241222]: 2025-12-13 07:31:22.693 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 02:31:22 np0005558317 nova_compute[241222]: 2025-12-13 07:31:22.693 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 02:31:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:31:24 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v639: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:26 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v640: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:31:28 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v641: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:28 np0005558317 podman[243003]: 2025-12-13 07:31:28.712254437 +0000 UTC m=+0.055575505 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Dec 13 02:31:30 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v642: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:32 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v643: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:31:34 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v644: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:36 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v645: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:36 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:31:36.986 154121 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:fb:39', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ae:1b:16:aa:9c:6c'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 02:31:36 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:31:36.987 154121 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 02:31:36 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:31:36.987 154121 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=075cc82e-193d-47f2-a248-9917472f5475, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 02:31:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:31:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:31:38
Dec 13 02:31:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:31:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 02:31:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] pools ['vms', 'images', 'volumes', '.mgr', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'default.rgw.control']
Dec 13 02:31:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:31:38 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v646: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:31:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:31:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:31:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:31:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:31:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:31:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:31:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:31:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:31:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:31:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:31:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:31:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:31:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:31:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:31:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:31:40 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v647: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:40 np0005558317 podman[243026]: 2025-12-13 07:31:40.602574007 +0000 UTC m=+0.039667746 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Dec 13 02:31:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:31:41.639 154121 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 02:31:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:31:41.640 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 02:31:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:31:41.640 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 02:31:42 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v648: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:31:44 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v649: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:46 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v650: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:31:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:31:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:31:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:31:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:31:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:31:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:31:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:31:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:31:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:31:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:31:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:31:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:31:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 02:31:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:31:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:31:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:31:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:31:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:31:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:31:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:31:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:31:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:31:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:31:48 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v651: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:49 np0005558317 podman[243043]: 2025-12-13 07:31:49.687418935 +0000 UTC m=+0.029602383 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 02:31:50 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v652: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:52 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v653: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:31:54 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v654: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:56 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v655: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:31:58 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v656: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:31:59 np0005558317 podman[243059]: 2025-12-13 07:31:59.714988495 +0000 UTC m=+0.058106791 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 02:32:00 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v657: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:02 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v658: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:32:04 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v659: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:06 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v660: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:32:08 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v661: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:32:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:32:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:32:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:32:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:32:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:32:10 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v662: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:10 np0005558317 podman[243082]: 2025-12-13 07:32:10.695951135 +0000 UTC m=+0.039193193 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd)
Dec 13 02:32:12 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v663: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:32:13 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 02:32:13 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/554917468' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 02:32:13 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 02:32:13 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/554917468' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 02:32:14 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v664: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:16 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v665: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:16 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:32:16 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:32:16 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:32:16 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:32:16 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:32:16 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:32:16 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:32:16 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:32:16 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:32:16 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:32:16 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:32:16 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:32:16 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:32:16 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:32:16 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:32:16 np0005558317 podman[243240]: 2025-12-13 07:32:16.949514327 +0000 UTC m=+0.027146536 container create 94f9b2cd60605c89a09fad5f9496c713c8ccd4eebc11d199f0036e5fb16f7120 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jepsen, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 02:32:16 np0005558317 systemd[1]: Started libpod-conmon-94f9b2cd60605c89a09fad5f9496c713c8ccd4eebc11d199f0036e5fb16f7120.scope.
Dec 13 02:32:16 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:32:17 np0005558317 podman[243240]: 2025-12-13 07:32:17.003023503 +0000 UTC m=+0.080655712 container init 94f9b2cd60605c89a09fad5f9496c713c8ccd4eebc11d199f0036e5fb16f7120 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jepsen, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 02:32:17 np0005558317 podman[243240]: 2025-12-13 07:32:17.008322164 +0000 UTC m=+0.085954373 container start 94f9b2cd60605c89a09fad5f9496c713c8ccd4eebc11d199f0036e5fb16f7120 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 02:32:17 np0005558317 podman[243240]: 2025-12-13 07:32:17.00937295 +0000 UTC m=+0.087005159 container attach 94f9b2cd60605c89a09fad5f9496c713c8ccd4eebc11d199f0036e5fb16f7120 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jepsen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec 13 02:32:17 np0005558317 gallant_jepsen[243253]: 167 167
Dec 13 02:32:17 np0005558317 systemd[1]: libpod-94f9b2cd60605c89a09fad5f9496c713c8ccd4eebc11d199f0036e5fb16f7120.scope: Deactivated successfully.
Dec 13 02:32:17 np0005558317 podman[243240]: 2025-12-13 07:32:17.011933444 +0000 UTC m=+0.089565652 container died 94f9b2cd60605c89a09fad5f9496c713c8ccd4eebc11d199f0036e5fb16f7120 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jepsen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:32:17 np0005558317 systemd[1]: var-lib-containers-storage-overlay-7139f97ed23a1e85fe664990a028278c8dab2d9e1796a751266eba50248dfce2-merged.mount: Deactivated successfully.
Dec 13 02:32:17 np0005558317 podman[243240]: 2025-12-13 07:32:17.0285112 +0000 UTC m=+0.106143399 container remove 94f9b2cd60605c89a09fad5f9496c713c8ccd4eebc11d199f0036e5fb16f7120 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jepsen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 02:32:17 np0005558317 podman[243240]: 2025-12-13 07:32:16.93775012 +0000 UTC m=+0.015382349 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:17 np0005558317 systemd[1]: libpod-conmon-94f9b2cd60605c89a09fad5f9496c713c8ccd4eebc11d199f0036e5fb16f7120.scope: Deactivated successfully.
Dec 13 02:32:17 np0005558317 podman[243275]: 2025-12-13 07:32:17.147363389 +0000 UTC m=+0.028381648 container create be3b964643cce098266f993155d5067de8547dbb2dab97824b3d5fbcc1269234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 02:32:17 np0005558317 systemd[1]: Started libpod-conmon-be3b964643cce098266f993155d5067de8547dbb2dab97824b3d5fbcc1269234.scope.
Dec 13 02:32:17 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:32:17 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196206feda95f3b75c452a39a247dbd3bd197551dca3e5c975daab4a145f5cf4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:17 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196206feda95f3b75c452a39a247dbd3bd197551dca3e5c975daab4a145f5cf4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:17 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196206feda95f3b75c452a39a247dbd3bd197551dca3e5c975daab4a145f5cf4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:17 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196206feda95f3b75c452a39a247dbd3bd197551dca3e5c975daab4a145f5cf4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:17 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196206feda95f3b75c452a39a247dbd3bd197551dca3e5c975daab4a145f5cf4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:17 np0005558317 podman[243275]: 2025-12-13 07:32:17.211505014 +0000 UTC m=+0.092523292 container init be3b964643cce098266f993155d5067de8547dbb2dab97824b3d5fbcc1269234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_antonelli, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 02:32:17 np0005558317 podman[243275]: 2025-12-13 07:32:17.218155927 +0000 UTC m=+0.099174185 container start be3b964643cce098266f993155d5067de8547dbb2dab97824b3d5fbcc1269234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:32:17 np0005558317 podman[243275]: 2025-12-13 07:32:17.219132483 +0000 UTC m=+0.100150751 container attach be3b964643cce098266f993155d5067de8547dbb2dab97824b3d5fbcc1269234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_antonelli, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:32:17 np0005558317 podman[243275]: 2025-12-13 07:32:17.136498243 +0000 UTC m=+0.017516511 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:17 np0005558317 jolly_antonelli[243288]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:32:17 np0005558317 jolly_antonelli[243288]: --> All data devices are unavailable
Dec 13 02:32:17 np0005558317 systemd[1]: libpod-be3b964643cce098266f993155d5067de8547dbb2dab97824b3d5fbcc1269234.scope: Deactivated successfully.
Dec 13 02:32:17 np0005558317 podman[243308]: 2025-12-13 07:32:17.596717365 +0000 UTC m=+0.016579179 container died be3b964643cce098266f993155d5067de8547dbb2dab97824b3d5fbcc1269234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 02:32:17 np0005558317 systemd[1]: var-lib-containers-storage-overlay-196206feda95f3b75c452a39a247dbd3bd197551dca3e5c975daab4a145f5cf4-merged.mount: Deactivated successfully.
Dec 13 02:32:17 np0005558317 podman[243308]: 2025-12-13 07:32:17.614277747 +0000 UTC m=+0.034139561 container remove be3b964643cce098266f993155d5067de8547dbb2dab97824b3d5fbcc1269234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_antonelli, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:32:17 np0005558317 systemd[1]: libpod-conmon-be3b964643cce098266f993155d5067de8547dbb2dab97824b3d5fbcc1269234.scope: Deactivated successfully.
Dec 13 02:32:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:32:17 np0005558317 podman[243380]: 2025-12-13 07:32:17.957271509 +0000 UTC m=+0.029171924 container create 7cc89ae3c5706b19cfdb6ce824d1b680d8efe07c02d1bd77e5a66ca0342821d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kare, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 02:32:17 np0005558317 systemd[1]: Started libpod-conmon-7cc89ae3c5706b19cfdb6ce824d1b680d8efe07c02d1bd77e5a66ca0342821d5.scope.
Dec 13 02:32:17 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:32:18 np0005558317 podman[243380]: 2025-12-13 07:32:18.003279032 +0000 UTC m=+0.075179447 container init 7cc89ae3c5706b19cfdb6ce824d1b680d8efe07c02d1bd77e5a66ca0342821d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kare, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:32:18 np0005558317 podman[243380]: 2025-12-13 07:32:18.008224259 +0000 UTC m=+0.080124674 container start 7cc89ae3c5706b19cfdb6ce824d1b680d8efe07c02d1bd77e5a66ca0342821d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kare, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 02:32:18 np0005558317 podman[243380]: 2025-12-13 07:32:18.00921978 +0000 UTC m=+0.081120195 container attach 7cc89ae3c5706b19cfdb6ce824d1b680d8efe07c02d1bd77e5a66ca0342821d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 02:32:18 np0005558317 jovial_kare[243393]: 167 167
Dec 13 02:32:18 np0005558317 systemd[1]: libpod-7cc89ae3c5706b19cfdb6ce824d1b680d8efe07c02d1bd77e5a66ca0342821d5.scope: Deactivated successfully.
Dec 13 02:32:18 np0005558317 podman[243380]: 2025-12-13 07:32:18.01115614 +0000 UTC m=+0.083056555 container died 7cc89ae3c5706b19cfdb6ce824d1b680d8efe07c02d1bd77e5a66ca0342821d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:32:18 np0005558317 systemd[1]: var-lib-containers-storage-overlay-f739e3e8752c27ef72b6064870a925084ba5946a0b4c8a3a00da55a1868346a0-merged.mount: Deactivated successfully.
Dec 13 02:32:18 np0005558317 podman[243380]: 2025-12-13 07:32:18.027339895 +0000 UTC m=+0.099240310 container remove 7cc89ae3c5706b19cfdb6ce824d1b680d8efe07c02d1bd77e5a66ca0342821d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kare, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:32:18 np0005558317 podman[243380]: 2025-12-13 07:32:17.945161962 +0000 UTC m=+0.017062397 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:18 np0005558317 systemd[1]: libpod-conmon-7cc89ae3c5706b19cfdb6ce824d1b680d8efe07c02d1bd77e5a66ca0342821d5.scope: Deactivated successfully.
Dec 13 02:32:18 np0005558317 podman[243415]: 2025-12-13 07:32:18.147195912 +0000 UTC m=+0.028716167 container create a9b56310117cdf37207cc6838efaea3c9c98e4fc5893d925db4a33dcf7f15ceb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_buck, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 02:32:18 np0005558317 systemd[1]: Started libpod-conmon-a9b56310117cdf37207cc6838efaea3c9c98e4fc5893d925db4a33dcf7f15ceb.scope.
Dec 13 02:32:18 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:32:18 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b72566f2673731037b2f685013051ddf2363a9eee9b8f3b5d046228748ed0eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:18 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b72566f2673731037b2f685013051ddf2363a9eee9b8f3b5d046228748ed0eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:18 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b72566f2673731037b2f685013051ddf2363a9eee9b8f3b5d046228748ed0eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:18 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b72566f2673731037b2f685013051ddf2363a9eee9b8f3b5d046228748ed0eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:18 np0005558317 podman[243415]: 2025-12-13 07:32:18.199924802 +0000 UTC m=+0.081445077 container init a9b56310117cdf37207cc6838efaea3c9c98e4fc5893d925db4a33dcf7f15ceb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_buck, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 02:32:18 np0005558317 podman[243415]: 2025-12-13 07:32:18.204994111 +0000 UTC m=+0.086514386 container start a9b56310117cdf37207cc6838efaea3c9c98e4fc5893d925db4a33dcf7f15ceb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_buck, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 02:32:18 np0005558317 podman[243415]: 2025-12-13 07:32:18.206205228 +0000 UTC m=+0.087725493 container attach a9b56310117cdf37207cc6838efaea3c9c98e4fc5893d925db4a33dcf7f15ceb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_buck, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:32:18 np0005558317 podman[243415]: 2025-12-13 07:32:18.135683389 +0000 UTC m=+0.017203664 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:18 np0005558317 priceless_buck[243428]: {
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:    "0": [
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:        {
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "devices": [
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "/dev/loop3"
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            ],
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "lv_name": "ceph_lv0",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "lv_size": "21470642176",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "name": "ceph_lv0",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "tags": {
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.cluster_name": "ceph",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.crush_device_class": "",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.encrypted": "0",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.objectstore": "bluestore",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.osd_id": "0",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.type": "block",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.vdo": "0",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.with_tpm": "0"
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            },
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "type": "block",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "vg_name": "ceph_vg0"
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:        }
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:    ],
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:    "1": [
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:        {
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "devices": [
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "/dev/loop4"
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            ],
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "lv_name": "ceph_lv1",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "lv_size": "21470642176",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "name": "ceph_lv1",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "tags": {
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.cluster_name": "ceph",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.crush_device_class": "",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.encrypted": "0",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.objectstore": "bluestore",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.osd_id": "1",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.type": "block",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.vdo": "0",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.with_tpm": "0"
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            },
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "type": "block",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "vg_name": "ceph_vg1"
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:        }
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:    ],
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:    "2": [
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:        {
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "devices": [
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "/dev/loop5"
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            ],
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "lv_name": "ceph_lv2",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "lv_size": "21470642176",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "name": "ceph_lv2",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "tags": {
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.cluster_name": "ceph",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.crush_device_class": "",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.encrypted": "0",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.objectstore": "bluestore",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.osd_id": "2",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.type": "block",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.vdo": "0",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:                "ceph.with_tpm": "0"
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            },
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "type": "block",
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:            "vg_name": "ceph_vg2"
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:        }
Dec 13 02:32:18 np0005558317 priceless_buck[243428]:    ]
Dec 13 02:32:18 np0005558317 priceless_buck[243428]: }
Dec 13 02:32:18 np0005558317 systemd[1]: libpod-a9b56310117cdf37207cc6838efaea3c9c98e4fc5893d925db4a33dcf7f15ceb.scope: Deactivated successfully.
Dec 13 02:32:18 np0005558317 podman[243437]: 2025-12-13 07:32:18.475298607 +0000 UTC m=+0.017398049 container died a9b56310117cdf37207cc6838efaea3c9c98e4fc5893d925db4a33dcf7f15ceb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:32:18 np0005558317 systemd[1]: var-lib-containers-storage-overlay-6b72566f2673731037b2f685013051ddf2363a9eee9b8f3b5d046228748ed0eb-merged.mount: Deactivated successfully.
Dec 13 02:32:18 np0005558317 podman[243437]: 2025-12-13 07:32:18.496181656 +0000 UTC m=+0.038281089 container remove a9b56310117cdf37207cc6838efaea3c9c98e4fc5893d925db4a33dcf7f15ceb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_buck, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 02:32:18 np0005558317 systemd[1]: libpod-conmon-a9b56310117cdf37207cc6838efaea3c9c98e4fc5893d925db4a33dcf7f15ceb.scope: Deactivated successfully.
Dec 13 02:32:18 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v666: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:18 np0005558317 podman[243510]: 2025-12-13 07:32:18.837001571 +0000 UTC m=+0.030288663 container create c1661b5e4b9ad9f6728bbea9b27f3c9c2f7cafe246fe2e94e8034dc8485b41df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_faraday, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 02:32:18 np0005558317 systemd[1]: Started libpod-conmon-c1661b5e4b9ad9f6728bbea9b27f3c9c2f7cafe246fe2e94e8034dc8485b41df.scope.
Dec 13 02:32:18 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:32:18 np0005558317 podman[243510]: 2025-12-13 07:32:18.888846599 +0000 UTC m=+0.082133680 container init c1661b5e4b9ad9f6728bbea9b27f3c9c2f7cafe246fe2e94e8034dc8485b41df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:32:18 np0005558317 podman[243510]: 2025-12-13 07:32:18.893771497 +0000 UTC m=+0.087058579 container start c1661b5e4b9ad9f6728bbea9b27f3c9c2f7cafe246fe2e94e8034dc8485b41df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_faraday, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:32:18 np0005558317 podman[243510]: 2025-12-13 07:32:18.894905729 +0000 UTC m=+0.088192831 container attach c1661b5e4b9ad9f6728bbea9b27f3c9c2f7cafe246fe2e94e8034dc8485b41df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 02:32:18 np0005558317 adoring_faraday[243523]: 167 167
Dec 13 02:32:18 np0005558317 systemd[1]: libpod-c1661b5e4b9ad9f6728bbea9b27f3c9c2f7cafe246fe2e94e8034dc8485b41df.scope: Deactivated successfully.
Dec 13 02:32:18 np0005558317 conmon[243523]: conmon c1661b5e4b9ad9f6728b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c1661b5e4b9ad9f6728bbea9b27f3c9c2f7cafe246fe2e94e8034dc8485b41df.scope/container/memory.events
Dec 13 02:32:18 np0005558317 podman[243510]: 2025-12-13 07:32:18.897926889 +0000 UTC m=+0.091213972 container died c1661b5e4b9ad9f6728bbea9b27f3c9c2f7cafe246fe2e94e8034dc8485b41df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 02:32:18 np0005558317 podman[243510]: 2025-12-13 07:32:18.913899457 +0000 UTC m=+0.107186539 container remove c1661b5e4b9ad9f6728bbea9b27f3c9c2f7cafe246fe2e94e8034dc8485b41df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_faraday, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 02:32:18 np0005558317 podman[243510]: 2025-12-13 07:32:18.824073747 +0000 UTC m=+0.017360849 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:18 np0005558317 systemd[1]: libpod-conmon-c1661b5e4b9ad9f6728bbea9b27f3c9c2f7cafe246fe2e94e8034dc8485b41df.scope: Deactivated successfully.
Dec 13 02:32:18 np0005558317 systemd[1]: var-lib-containers-storage-overlay-e7efabf4990f8014512806d641984150a0b2523b1b161520119852171d7051f8-merged.mount: Deactivated successfully.
Dec 13 02:32:19 np0005558317 podman[243544]: 2025-12-13 07:32:19.035594481 +0000 UTC m=+0.027379605 container create a6d372d62dde1ba22ae0ddebccd8605050245e88cc51c010a1bc44e9557f0ab4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_montalcini, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:32:19 np0005558317 systemd[1]: Started libpod-conmon-a6d372d62dde1ba22ae0ddebccd8605050245e88cc51c010a1bc44e9557f0ab4.scope.
Dec 13 02:32:19 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:32:19 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7cbbe0e5282f9e05c9072323aba0aa9e9516fd35e17809307dc28b8b20c4fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:19 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7cbbe0e5282f9e05c9072323aba0aa9e9516fd35e17809307dc28b8b20c4fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:19 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7cbbe0e5282f9e05c9072323aba0aa9e9516fd35e17809307dc28b8b20c4fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:19 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7cbbe0e5282f9e05c9072323aba0aa9e9516fd35e17809307dc28b8b20c4fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:19 np0005558317 podman[243544]: 2025-12-13 07:32:19.096970066 +0000 UTC m=+0.088755199 container init a6d372d62dde1ba22ae0ddebccd8605050245e88cc51c010a1bc44e9557f0ab4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:32:19 np0005558317 podman[243544]: 2025-12-13 07:32:19.101934688 +0000 UTC m=+0.093719803 container start a6d372d62dde1ba22ae0ddebccd8605050245e88cc51c010a1bc44e9557f0ab4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_montalcini, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:32:19 np0005558317 podman[243544]: 2025-12-13 07:32:19.103004981 +0000 UTC m=+0.094790095 container attach a6d372d62dde1ba22ae0ddebccd8605050245e88cc51c010a1bc44e9557f0ab4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:32:19 np0005558317 podman[243544]: 2025-12-13 07:32:19.024962603 +0000 UTC m=+0.016747727 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:19 np0005558317 lvm[243635]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:32:19 np0005558317 lvm[243634]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:32:19 np0005558317 lvm[243635]: VG ceph_vg1 finished
Dec 13 02:32:19 np0005558317 lvm[243634]: VG ceph_vg0 finished
Dec 13 02:32:19 np0005558317 lvm[243638]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:32:19 np0005558317 lvm[243638]: VG ceph_vg2 finished
Dec 13 02:32:19 np0005558317 boring_montalcini[243557]: {}
Dec 13 02:32:19 np0005558317 systemd[1]: libpod-a6d372d62dde1ba22ae0ddebccd8605050245e88cc51c010a1bc44e9557f0ab4.scope: Deactivated successfully.
Dec 13 02:32:19 np0005558317 podman[243544]: 2025-12-13 07:32:19.741887626 +0000 UTC m=+0.733672741 container died a6d372d62dde1ba22ae0ddebccd8605050245e88cc51c010a1bc44e9557f0ab4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 02:32:19 np0005558317 systemd[1]: var-lib-containers-storage-overlay-4e7cbbe0e5282f9e05c9072323aba0aa9e9516fd35e17809307dc28b8b20c4fe-merged.mount: Deactivated successfully.
Dec 13 02:32:19 np0005558317 podman[243544]: 2025-12-13 07:32:19.769617308 +0000 UTC m=+0.761402421 container remove a6d372d62dde1ba22ae0ddebccd8605050245e88cc51c010a1bc44e9557f0ab4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 02:32:19 np0005558317 podman[243640]: 2025-12-13 07:32:19.778970271 +0000 UTC m=+0.055823659 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 13 02:32:19 np0005558317 systemd[1]: libpod-conmon-a6d372d62dde1ba22ae0ddebccd8605050245e88cc51c010a1bc44e9557f0ab4.scope: Deactivated successfully.
Dec 13 02:32:19 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:32:19 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:32:19 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:32:19 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:32:20 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v667: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:20 np0005558317 nova_compute[241222]: 2025-12-13 07:32:20.569 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:32:20 np0005558317 nova_compute[241222]: 2025-12-13 07:32:20.588 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:32:20 np0005558317 nova_compute[241222]: 2025-12-13 07:32:20.589 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:32:20 np0005558317 nova_compute[241222]: 2025-12-13 07:32:20.589 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:32:20 np0005558317 nova_compute[241222]: 2025-12-13 07:32:20.589 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 02:32:20 np0005558317 nova_compute[241222]: 2025-12-13 07:32:20.589 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 02:32:20 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:32:20 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:32:20 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 02:32:20 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1322650243' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 02:32:20 np0005558317 nova_compute[241222]: 2025-12-13 07:32:20.995 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 02:32:21 np0005558317 nova_compute[241222]: 2025-12-13 07:32:21.199 241226 WARNING nova.virt.libvirt.driver [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 02:32:21 np0005558317 nova_compute[241222]: 2025-12-13 07:32:21.200 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5190MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": 
"0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 02:32:21 np0005558317 nova_compute[241222]: 2025-12-13 07:32:21.200 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:32:21 np0005558317 nova_compute[241222]: 2025-12-13 07:32:21.200 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:32:21 np0005558317 nova_compute[241222]: 2025-12-13 07:32:21.249 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 02:32:21 np0005558317 nova_compute[241222]: 2025-12-13 07:32:21.249 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 02:32:21 np0005558317 nova_compute[241222]: 2025-12-13 07:32:21.266 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 02:32:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 02:32:21 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/730053541' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 02:32:21 np0005558317 nova_compute[241222]: 2025-12-13 07:32:21.677 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 02:32:21 np0005558317 nova_compute[241222]: 2025-12-13 07:32:21.681 241226 DEBUG nova.compute.provider_tree [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Inventory has not changed in ProviderTree for provider: 1d614cf3-e40f-4742-a628-7a61041be9be update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 02:32:21 np0005558317 nova_compute[241222]: 2025-12-13 07:32:21.694 241226 DEBUG nova.scheduler.client.report [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Inventory has not changed for provider 1d614cf3-e40f-4742-a628-7a61041be9be based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 02:32:21 np0005558317 nova_compute[241222]: 2025-12-13 07:32:21.695 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 02:32:21 np0005558317 nova_compute[241222]: 2025-12-13 07:32:21.695 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.495s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:32:22 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v668: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:22 np0005558317 nova_compute[241222]: 2025-12-13 07:32:22.694 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:32:22 np0005558317 nova_compute[241222]: 2025-12-13 07:32:22.695 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:32:22 np0005558317 nova_compute[241222]: 2025-12-13 07:32:22.695 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 02:32:22 np0005558317 nova_compute[241222]: 2025-12-13 07:32:22.695 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 02:32:22 np0005558317 nova_compute[241222]: 2025-12-13 07:32:22.709 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 02:32:22 np0005558317 nova_compute[241222]: 2025-12-13 07:32:22.709 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:32:22 np0005558317 nova_compute[241222]: 2025-12-13 07:32:22.710 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:32:22 np0005558317 nova_compute[241222]: 2025-12-13 07:32:22.710 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:32:22 np0005558317 nova_compute[241222]: 2025-12-13 07:32:22.710 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:32:22 np0005558317 nova_compute[241222]: 2025-12-13 07:32:22.710 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:32:22 np0005558317 nova_compute[241222]: 2025-12-13 07:32:22.710 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 02:32:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:32:23 np0005558317 nova_compute[241222]: 2025-12-13 07:32:23.569 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:32:24 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v669: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:26 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v670: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:32:28 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v671: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:30 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v672: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:30 np0005558317 podman[243737]: 2025-12-13 07:32:30.717940906 +0000 UTC m=+0.060309982 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 02:32:32 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v673: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:32:34 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v674: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:36 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v675: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:32:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:32:38
Dec 13 02:32:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:32:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 02:32:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] pools ['vms', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'images', 'default.rgw.log']
Dec 13 02:32:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:32:38 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v676: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:32:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:32:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:32:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:32:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:32:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:32:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:32:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:32:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:32:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:32:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:32:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:32:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:32:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:32:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:32:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:32:40 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v677: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:32:41.641 154121 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:32:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:32:41.641 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:32:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:32:41.641 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:32:41 np0005558317 podman[243760]: 2025-12-13 07:32:41.697814816 +0000 UTC m=+0.038623451 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 02:32:42 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v678: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:32:44 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v679: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:46 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v680: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:32:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:32:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:32:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:32:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:32:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:32:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:32:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:32:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:32:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:32:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:32:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:32:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:32:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 02:32:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:32:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:32:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:32:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:32:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:32:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:32:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:32:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:32:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:32:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:32:48 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v681: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:50 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v682: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:50 np0005558317 podman[243777]: 2025-12-13 07:32:50.693124882 +0000 UTC m=+0.036250739 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 13 02:32:52 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v683: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.816894) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611172816913, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1489, "num_deletes": 506, "total_data_size": 1912352, "memory_usage": 1945872, "flush_reason": "Manual Compaction"}
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611172821373, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1883183, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13433, "largest_seqno": 14921, "table_properties": {"data_size": 1876640, "index_size": 3234, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 15986, "raw_average_key_size": 18, "raw_value_size": 1861643, "raw_average_value_size": 2115, "num_data_blocks": 148, "num_entries": 880, "num_filter_entries": 880, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611048, "oldest_key_time": 1765611048, "file_creation_time": 1765611172, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 4502 microseconds, and 3354 cpu microseconds.
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.821395) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1883183 bytes OK
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.821406) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.821732) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.821743) EVENT_LOG_v1 {"time_micros": 1765611172821740, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.821751) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1904735, prev total WAL file size 1904735, number of live WAL files 2.
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.822110) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1839KB)], [32(7464KB)]
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611172822129, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9527180, "oldest_snapshot_seqno": -1}
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3835 keys, 7496313 bytes, temperature: kUnknown
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611172836516, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7496313, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7468817, "index_size": 16826, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9605, "raw_key_size": 93842, "raw_average_key_size": 24, "raw_value_size": 7397485, "raw_average_value_size": 1928, "num_data_blocks": 715, "num_entries": 3835, "num_filter_entries": 3835, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610001, "oldest_key_time": 0, "file_creation_time": 1765611172, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.836610) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7496313 bytes
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.836921) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 661.2 rd, 520.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 7.3 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(9.0) write-amplify(4.0) OK, records in: 4860, records dropped: 1025 output_compression: NoCompression
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.836933) EVENT_LOG_v1 {"time_micros": 1765611172836928, "job": 14, "event": "compaction_finished", "compaction_time_micros": 14409, "compaction_time_cpu_micros": 11781, "output_level": 6, "num_output_files": 1, "total_output_size": 7496313, "num_input_records": 4860, "num_output_records": 3835, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611172837171, "job": 14, "event": "table_file_deletion", "file_number": 34}
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611172837945, "job": 14, "event": "table_file_deletion", "file_number": 32}
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.822069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.837980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.837983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.837984) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.837985) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:32:52 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.837986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:32:54 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v684: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:56 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v685: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:32:58 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v686: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:00 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v687: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:01 np0005558317 podman[243794]: 2025-12-13 07:33:01.713160897 +0000 UTC m=+0.056480299 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Dec 13 02:33:02 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v688: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:33:04 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v689: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:06 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v690: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:33:08 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v691: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:33:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:33:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:33:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:33:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:33:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:33:10 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v692: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:12 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v693: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:12 np0005558317 podman[243818]: 2025-12-13 07:33:12.69257572 +0000 UTC m=+0.035035044 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=multipathd)
Dec 13 02:33:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:33:13 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 02:33:13 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1946603155' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 02:33:13 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 02:33:13 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1946603155' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 02:33:14 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v694: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:16 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v695: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:33:18 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v696: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:20 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:33:20 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:33:20 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:33:20 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:33:20 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:33:20 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:33:20 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:33:20 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:33:20 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:33:20 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:33:20 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:33:20 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:33:20 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v697: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:20 np0005558317 podman[243974]: 2025-12-13 07:33:20.690773189 +0000 UTC m=+0.027963901 container create 5faf10471519b74dd01a26bc0c43f551f337e4f8920b329ac89e158cbf1d84f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 02:33:20 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:33:20 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:33:20 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:33:20 np0005558317 systemd[1]: Started libpod-conmon-5faf10471519b74dd01a26bc0c43f551f337e4f8920b329ac89e158cbf1d84f6.scope.
Dec 13 02:33:20 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:33:20 np0005558317 podman[243974]: 2025-12-13 07:33:20.734881558 +0000 UTC m=+0.072072271 container init 5faf10471519b74dd01a26bc0c43f551f337e4f8920b329ac89e158cbf1d84f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_jang, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:33:20 np0005558317 podman[243974]: 2025-12-13 07:33:20.742370606 +0000 UTC m=+0.079561320 container start 5faf10471519b74dd01a26bc0c43f551f337e4f8920b329ac89e158cbf1d84f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:33:20 np0005558317 podman[243974]: 2025-12-13 07:33:20.744550726 +0000 UTC m=+0.081741459 container attach 5faf10471519b74dd01a26bc0c43f551f337e4f8920b329ac89e158cbf1d84f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_jang, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 02:33:20 np0005558317 gracious_jang[243987]: 167 167
Dec 13 02:33:20 np0005558317 systemd[1]: libpod-5faf10471519b74dd01a26bc0c43f551f337e4f8920b329ac89e158cbf1d84f6.scope: Deactivated successfully.
Dec 13 02:33:20 np0005558317 podman[243974]: 2025-12-13 07:33:20.749289244 +0000 UTC m=+0.086479957 container died 5faf10471519b74dd01a26bc0c43f551f337e4f8920b329ac89e158cbf1d84f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_jang, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:33:20 np0005558317 systemd[1]: var-lib-containers-storage-overlay-bc7de5d9df05f978a40f3da76e8b814d79265b81435a0578c1d0ea027ab74380-merged.mount: Deactivated successfully.
Dec 13 02:33:20 np0005558317 podman[243974]: 2025-12-13 07:33:20.773642916 +0000 UTC m=+0.110833629 container remove 5faf10471519b74dd01a26bc0c43f551f337e4f8920b329ac89e158cbf1d84f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_jang, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:33:20 np0005558317 podman[243974]: 2025-12-13 07:33:20.678957015 +0000 UTC m=+0.016147747 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:33:20 np0005558317 systemd[1]: libpod-conmon-5faf10471519b74dd01a26bc0c43f551f337e4f8920b329ac89e158cbf1d84f6.scope: Deactivated successfully.
Dec 13 02:33:20 np0005558317 podman[243988]: 2025-12-13 07:33:20.813917108 +0000 UTC m=+0.088288216 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 02:33:20 np0005558317 podman[244025]: 2025-12-13 07:33:20.896153536 +0000 UTC m=+0.028538111 container create c12e148393487ffe5e8681deaa2c4b27c86c426aba87a7354eea52e07e8ac1d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rosalind, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:33:20 np0005558317 systemd[1]: Started libpod-conmon-c12e148393487ffe5e8681deaa2c4b27c86c426aba87a7354eea52e07e8ac1d5.scope.
Dec 13 02:33:20 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:33:20 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31f56333a3712a87ea85fca2a74bf286f3718b90475985cc0f5f4646d986af74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:20 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31f56333a3712a87ea85fca2a74bf286f3718b90475985cc0f5f4646d986af74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:20 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31f56333a3712a87ea85fca2a74bf286f3718b90475985cc0f5f4646d986af74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:20 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31f56333a3712a87ea85fca2a74bf286f3718b90475985cc0f5f4646d986af74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:20 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31f56333a3712a87ea85fca2a74bf286f3718b90475985cc0f5f4646d986af74/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:20 np0005558317 podman[244025]: 2025-12-13 07:33:20.960768025 +0000 UTC m=+0.093152610 container init c12e148393487ffe5e8681deaa2c4b27c86c426aba87a7354eea52e07e8ac1d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rosalind, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:20 np0005558317 podman[244025]: 2025-12-13 07:33:20.965870959 +0000 UTC m=+0.098255534 container start c12e148393487ffe5e8681deaa2c4b27c86c426aba87a7354eea52e07e8ac1d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rosalind, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:33:20 np0005558317 podman[244025]: 2025-12-13 07:33:20.967071085 +0000 UTC m=+0.099455680 container attach c12e148393487ffe5e8681deaa2c4b27c86c426aba87a7354eea52e07e8ac1d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 02:33:20 np0005558317 podman[244025]: 2025-12-13 07:33:20.885016229 +0000 UTC m=+0.017400814 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:33:21 np0005558317 elastic_rosalind[244038]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:33:21 np0005558317 elastic_rosalind[244038]: --> All data devices are unavailable
Dec 13 02:33:21 np0005558317 systemd[1]: libpod-c12e148393487ffe5e8681deaa2c4b27c86c426aba87a7354eea52e07e8ac1d5.scope: Deactivated successfully.
Dec 13 02:33:21 np0005558317 podman[244025]: 2025-12-13 07:33:21.330888911 +0000 UTC m=+0.463273486 container died c12e148393487ffe5e8681deaa2c4b27c86c426aba87a7354eea52e07e8ac1d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:33:21 np0005558317 systemd[1]: var-lib-containers-storage-overlay-31f56333a3712a87ea85fca2a74bf286f3718b90475985cc0f5f4646d986af74-merged.mount: Deactivated successfully.
Dec 13 02:33:21 np0005558317 podman[244025]: 2025-12-13 07:33:21.35334168 +0000 UTC m=+0.485726255 container remove c12e148393487ffe5e8681deaa2c4b27c86c426aba87a7354eea52e07e8ac1d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rosalind, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Dec 13 02:33:21 np0005558317 systemd[1]: libpod-conmon-c12e148393487ffe5e8681deaa2c4b27c86c426aba87a7354eea52e07e8ac1d5.scope: Deactivated successfully.
Dec 13 02:33:21 np0005558317 nova_compute[241222]: 2025-12-13 07:33:21.564 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:33:21 np0005558317 nova_compute[241222]: 2025-12-13 07:33:21.575 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:33:21 np0005558317 podman[244129]: 2025-12-13 07:33:21.691049232 +0000 UTC m=+0.027212158 container create c7201dd2d126ea51f19f369fd318c193db2223dfef30afb7ca5736eb1ac2cd8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_diffie, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:33:21 np0005558317 systemd[1]: Started libpod-conmon-c7201dd2d126ea51f19f369fd318c193db2223dfef30afb7ca5736eb1ac2cd8c.scope.
Dec 13 02:33:21 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:33:21 np0005558317 podman[244129]: 2025-12-13 07:33:21.751945811 +0000 UTC m=+0.088108738 container init c7201dd2d126ea51f19f369fd318c193db2223dfef30afb7ca5736eb1ac2cd8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 02:33:21 np0005558317 podman[244129]: 2025-12-13 07:33:21.756114809 +0000 UTC m=+0.092277736 container start c7201dd2d126ea51f19f369fd318c193db2223dfef30afb7ca5736eb1ac2cd8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_diffie, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 02:33:21 np0005558317 podman[244129]: 2025-12-13 07:33:21.757357726 +0000 UTC m=+0.093520652 container attach c7201dd2d126ea51f19f369fd318c193db2223dfef30afb7ca5736eb1ac2cd8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_diffie, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:33:21 np0005558317 jolly_diffie[244143]: 167 167
Dec 13 02:33:21 np0005558317 systemd[1]: libpod-c7201dd2d126ea51f19f369fd318c193db2223dfef30afb7ca5736eb1ac2cd8c.scope: Deactivated successfully.
Dec 13 02:33:21 np0005558317 podman[244129]: 2025-12-13 07:33:21.759571768 +0000 UTC m=+0.095734695 container died c7201dd2d126ea51f19f369fd318c193db2223dfef30afb7ca5736eb1ac2cd8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:33:21 np0005558317 systemd[1]: var-lib-containers-storage-overlay-8dab2912f3d697bbad7cb1b2e414622095da1f4eb65640202cf27a7333f844ef-merged.mount: Deactivated successfully.
Dec 13 02:33:21 np0005558317 podman[244129]: 2025-12-13 07:33:21.680253988 +0000 UTC m=+0.016416924 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:33:21 np0005558317 podman[244129]: 2025-12-13 07:33:21.778497266 +0000 UTC m=+0.114660192 container remove c7201dd2d126ea51f19f369fd318c193db2223dfef30afb7ca5736eb1ac2cd8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:33:21 np0005558317 systemd[1]: libpod-conmon-c7201dd2d126ea51f19f369fd318c193db2223dfef30afb7ca5736eb1ac2cd8c.scope: Deactivated successfully.
Dec 13 02:33:21 np0005558317 podman[244165]: 2025-12-13 07:33:21.897925581 +0000 UTC m=+0.027997112 container create 43803d7fdf18df57f308d70e8eec533e99fc28abbb0a1058cd03264717b1f87e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lehmann, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Dec 13 02:33:21 np0005558317 systemd[1]: Started libpod-conmon-43803d7fdf18df57f308d70e8eec533e99fc28abbb0a1058cd03264717b1f87e.scope.
Dec 13 02:33:21 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:33:21 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63666f72f3c8cba9b6e15299e53108db6c6d18a78e9ce6811887ee48f83be16c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:21 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63666f72f3c8cba9b6e15299e53108db6c6d18a78e9ce6811887ee48f83be16c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:21 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63666f72f3c8cba9b6e15299e53108db6c6d18a78e9ce6811887ee48f83be16c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:21 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63666f72f3c8cba9b6e15299e53108db6c6d18a78e9ce6811887ee48f83be16c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:21 np0005558317 podman[244165]: 2025-12-13 07:33:21.951788469 +0000 UTC m=+0.081859990 container init 43803d7fdf18df57f308d70e8eec533e99fc28abbb0a1058cd03264717b1f87e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 02:33:21 np0005558317 podman[244165]: 2025-12-13 07:33:21.956802715 +0000 UTC m=+0.086874236 container start 43803d7fdf18df57f308d70e8eec533e99fc28abbb0a1058cd03264717b1f87e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lehmann, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:21 np0005558317 podman[244165]: 2025-12-13 07:33:21.961458098 +0000 UTC m=+0.091529619 container attach 43803d7fdf18df57f308d70e8eec533e99fc28abbb0a1058cd03264717b1f87e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:21 np0005558317 podman[244165]: 2025-12-13 07:33:21.88682807 +0000 UTC m=+0.016899601 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]: {
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:    "0": [
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:        {
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "devices": [
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "/dev/loop3"
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            ],
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "lv_name": "ceph_lv0",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "lv_size": "21470642176",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "name": "ceph_lv0",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "tags": {
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.cluster_name": "ceph",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.crush_device_class": "",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.encrypted": "0",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.objectstore": "bluestore",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.osd_id": "0",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.type": "block",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.vdo": "0",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.with_tpm": "0"
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            },
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "type": "block",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "vg_name": "ceph_vg0"
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:        }
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:    ],
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:    "1": [
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:        {
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "devices": [
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "/dev/loop4"
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            ],
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "lv_name": "ceph_lv1",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "lv_size": "21470642176",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "name": "ceph_lv1",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "tags": {
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.cluster_name": "ceph",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.crush_device_class": "",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.encrypted": "0",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.objectstore": "bluestore",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.osd_id": "1",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.type": "block",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.vdo": "0",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.with_tpm": "0"
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            },
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "type": "block",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "vg_name": "ceph_vg1"
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:        }
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:    ],
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:    "2": [
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:        {
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "devices": [
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "/dev/loop5"
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            ],
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "lv_name": "ceph_lv2",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "lv_size": "21470642176",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "name": "ceph_lv2",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "tags": {
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.cluster_name": "ceph",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.crush_device_class": "",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.encrypted": "0",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.objectstore": "bluestore",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.osd_id": "2",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.type": "block",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.vdo": "0",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:                "ceph.with_tpm": "0"
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            },
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "type": "block",
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:            "vg_name": "ceph_vg2"
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:        }
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]:    ]
Dec 13 02:33:22 np0005558317 vigilant_lehmann[244178]: }
Dec 13 02:33:22 np0005558317 systemd[1]: libpod-43803d7fdf18df57f308d70e8eec533e99fc28abbb0a1058cd03264717b1f87e.scope: Deactivated successfully.
Dec 13 02:33:22 np0005558317 podman[244165]: 2025-12-13 07:33:22.190367818 +0000 UTC m=+0.320439349 container died 43803d7fdf18df57f308d70e8eec533e99fc28abbb0a1058cd03264717b1f87e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lehmann, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 02:33:22 np0005558317 systemd[1]: var-lib-containers-storage-overlay-63666f72f3c8cba9b6e15299e53108db6c6d18a78e9ce6811887ee48f83be16c-merged.mount: Deactivated successfully.
Dec 13 02:33:22 np0005558317 podman[244165]: 2025-12-13 07:33:22.213145879 +0000 UTC m=+0.343217399 container remove 43803d7fdf18df57f308d70e8eec533e99fc28abbb0a1058cd03264717b1f87e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 02:33:22 np0005558317 systemd[1]: libpod-conmon-43803d7fdf18df57f308d70e8eec533e99fc28abbb0a1058cd03264717b1f87e.scope: Deactivated successfully.
Dec 13 02:33:22 np0005558317 podman[244257]: 2025-12-13 07:33:22.547424555 +0000 UTC m=+0.028244499 container create 7664264d0496504f9b23ae92bfe8b561f4d8f13b90b70c9dd6158cbe3d94239d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_keldysh, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:33:22 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v698: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:22 np0005558317 nova_compute[241222]: 2025-12-13 07:33:22.567 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:33:22 np0005558317 nova_compute[241222]: 2025-12-13 07:33:22.568 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:33:22 np0005558317 nova_compute[241222]: 2025-12-13 07:33:22.568 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:33:22 np0005558317 nova_compute[241222]: 2025-12-13 07:33:22.568 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 02:33:22 np0005558317 nova_compute[241222]: 2025-12-13 07:33:22.568 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:33:22 np0005558317 systemd[1]: Started libpod-conmon-7664264d0496504f9b23ae92bfe8b561f4d8f13b90b70c9dd6158cbe3d94239d.scope.
Dec 13 02:33:22 np0005558317 nova_compute[241222]: 2025-12-13 07:33:22.585 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:33:22 np0005558317 nova_compute[241222]: 2025-12-13 07:33:22.585 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:33:22 np0005558317 nova_compute[241222]: 2025-12-13 07:33:22.585 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:33:22 np0005558317 nova_compute[241222]: 2025-12-13 07:33:22.585 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 02:33:22 np0005558317 nova_compute[241222]: 2025-12-13 07:33:22.586 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 02:33:22 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:33:22 np0005558317 podman[244257]: 2025-12-13 07:33:22.602380327 +0000 UTC m=+0.083200290 container init 7664264d0496504f9b23ae92bfe8b561f4d8f13b90b70c9dd6158cbe3d94239d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_keldysh, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 02:33:22 np0005558317 podman[244257]: 2025-12-13 07:33:22.60747809 +0000 UTC m=+0.088298034 container start 7664264d0496504f9b23ae92bfe8b561f4d8f13b90b70c9dd6158cbe3d94239d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:33:22 np0005558317 wonderful_keldysh[244271]: 167 167
Dec 13 02:33:22 np0005558317 podman[244257]: 2025-12-13 07:33:22.609705858 +0000 UTC m=+0.090525822 container attach 7664264d0496504f9b23ae92bfe8b561f4d8f13b90b70c9dd6158cbe3d94239d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_keldysh, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 02:33:22 np0005558317 systemd[1]: libpod-7664264d0496504f9b23ae92bfe8b561f4d8f13b90b70c9dd6158cbe3d94239d.scope: Deactivated successfully.
Dec 13 02:33:22 np0005558317 podman[244257]: 2025-12-13 07:33:22.610354116 +0000 UTC m=+0.091174071 container died 7664264d0496504f9b23ae92bfe8b561f4d8f13b90b70c9dd6158cbe3d94239d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_keldysh, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3)
Dec 13 02:33:22 np0005558317 podman[244257]: 2025-12-13 07:33:22.535901612 +0000 UTC m=+0.016721576 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:33:22 np0005558317 podman[244257]: 2025-12-13 07:33:22.639280215 +0000 UTC m=+0.120100159 container remove 7664264d0496504f9b23ae92bfe8b561f4d8f13b90b70c9dd6158cbe3d94239d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_keldysh, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 02:33:22 np0005558317 systemd[1]: libpod-conmon-7664264d0496504f9b23ae92bfe8b561f4d8f13b90b70c9dd6158cbe3d94239d.scope: Deactivated successfully.
Dec 13 02:33:22 np0005558317 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:33:22 np0005558317 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3360 writes, 15K keys, 3360 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3360 writes, 3360 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1302 writes, 5911 keys, 1302 commit groups, 1.0 writes per commit group, ingest: 8.65 MB, 0.01 MB/s#012Interval WAL: 1302 writes, 1302 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    385.8      0.04              0.03         7    0.006       0      0       0.0       0.0#012  L6      1/0    7.15 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   2.6    549.9    450.6      0.09              0.08         6    0.016     24K   3209       0.0       0.0#012 Sum      1/0    7.15 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.6    377.9    430.3      0.14              0.11        13    0.010     24K   3209       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6    431.3    438.3      0.08              0.07         8    0.010     17K   2478       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0    549.9    450.6      0.09              0.08         6    0.016     24K   3209       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    394.7      0.04              0.03         6    0.007       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     42.7      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.016, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.1 seconds#012Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5642ba289a30#2 capacity: 308.00 MB usage: 1.91 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(106,1.69 MB,0.547617%) FilterBlock(14,75.67 KB,0.023993%) IndexBlock(14,149.52 KB,0.0474063%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec 13 02:33:22 np0005558317 systemd[1]: var-lib-containers-storage-overlay-5bda0724c9686ef73bb7e6d5f4f67342c4482b1a743aca131fa9a603aad22301-merged.mount: Deactivated successfully.
Dec 13 02:33:22 np0005558317 podman[244312]: 2025-12-13 07:33:22.763983426 +0000 UTC m=+0.028941840 container create a4bb956b1cc4cab743bade0d9278bb783c0e7a56446bc6d1f0305aea37a49675 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:33:22 np0005558317 systemd[1]: Started libpod-conmon-a4bb956b1cc4cab743bade0d9278bb783c0e7a56446bc6d1f0305aea37a49675.scope.
Dec 13 02:33:22 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:33:22 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bddb1f08094305f4436a2b4b76a69eae023e5b95d208eea55233b64cd697f4cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:22 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bddb1f08094305f4436a2b4b76a69eae023e5b95d208eea55233b64cd697f4cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:22 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bddb1f08094305f4436a2b4b76a69eae023e5b95d208eea55233b64cd697f4cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:22 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bddb1f08094305f4436a2b4b76a69eae023e5b95d208eea55233b64cd697f4cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:33:22 np0005558317 podman[244312]: 2025-12-13 07:33:22.819559965 +0000 UTC m=+0.084518399 container init a4bb956b1cc4cab743bade0d9278bb783c0e7a56446bc6d1f0305aea37a49675 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 02:33:22 np0005558317 podman[244312]: 2025-12-13 07:33:22.82403604 +0000 UTC m=+0.088994444 container start a4bb956b1cc4cab743bade0d9278bb783c0e7a56446bc6d1f0305aea37a49675 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_rosalind, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:33:22 np0005558317 podman[244312]: 2025-12-13 07:33:22.825537032 +0000 UTC m=+0.090495457 container attach a4bb956b1cc4cab743bade0d9278bb783c0e7a56446bc6d1f0305aea37a49675 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_rosalind, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 02:33:22 np0005558317 podman[244312]: 2025-12-13 07:33:22.751624774 +0000 UTC m=+0.016583198 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:33:23 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 02:33:23 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2396368659' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 02:33:23 np0005558317 nova_compute[241222]: 2025-12-13 07:33:23.019 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 02:33:23 np0005558317 nova_compute[241222]: 2025-12-13 07:33:23.289 241226 WARNING nova.virt.libvirt.driver [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 02:33:23 np0005558317 nova_compute[241222]: 2025-12-13 07:33:23.289 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5142MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 02:33:23 np0005558317 nova_compute[241222]: 2025-12-13 07:33:23.290 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:33:23 np0005558317 nova_compute[241222]: 2025-12-13 07:33:23.290 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:33:23 np0005558317 nova_compute[241222]: 2025-12-13 07:33:23.331 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 02:33:23 np0005558317 nova_compute[241222]: 2025-12-13 07:33:23.331 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 02:33:23 np0005558317 nova_compute[241222]: 2025-12-13 07:33:23.343 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 02:33:23 np0005558317 lvm[244428]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:33:23 np0005558317 lvm[244429]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:33:23 np0005558317 lvm[244429]: VG ceph_vg2 finished
Dec 13 02:33:23 np0005558317 lvm[244428]: VG ceph_vg1 finished
Dec 13 02:33:23 np0005558317 lvm[244426]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:33:23 np0005558317 lvm[244426]: VG ceph_vg0 finished
Dec 13 02:33:23 np0005558317 nostalgic_rosalind[244326]: {}
Dec 13 02:33:23 np0005558317 systemd[1]: libpod-a4bb956b1cc4cab743bade0d9278bb783c0e7a56446bc6d1f0305aea37a49675.scope: Deactivated successfully.
Dec 13 02:33:23 np0005558317 podman[244312]: 2025-12-13 07:33:23.516260203 +0000 UTC m=+0.781218617 container died a4bb956b1cc4cab743bade0d9278bb783c0e7a56446bc6d1f0305aea37a49675 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Dec 13 02:33:23 np0005558317 systemd[1]: var-lib-containers-storage-overlay-bddb1f08094305f4436a2b4b76a69eae023e5b95d208eea55233b64cd697f4cb-merged.mount: Deactivated successfully.
Dec 13 02:33:23 np0005558317 podman[244312]: 2025-12-13 07:33:23.53791433 +0000 UTC m=+0.802872734 container remove a4bb956b1cc4cab743bade0d9278bb783c0e7a56446bc6d1f0305aea37a49675 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Dec 13 02:33:23 np0005558317 systemd[1]: libpod-conmon-a4bb956b1cc4cab743bade0d9278bb783c0e7a56446bc6d1f0305aea37a49675.scope: Deactivated successfully.
Dec 13 02:33:23 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:33:23 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:33:23 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:33:23 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:33:23 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:33:23 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:33:23 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 02:33:23 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/546582947' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 02:33:23 np0005558317 nova_compute[241222]: 2025-12-13 07:33:23.787 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 02:33:23 np0005558317 nova_compute[241222]: 2025-12-13 07:33:23.791 241226 DEBUG nova.compute.provider_tree [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Inventory has not changed in ProviderTree for provider: 1d614cf3-e40f-4742-a628-7a61041be9be update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 02:33:23 np0005558317 nova_compute[241222]: 2025-12-13 07:33:23.801 241226 DEBUG nova.scheduler.client.report [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Inventory has not changed for provider 1d614cf3-e40f-4742-a628-7a61041be9be based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 02:33:23 np0005558317 nova_compute[241222]: 2025-12-13 07:33:23.802 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 02:33:23 np0005558317 nova_compute[241222]: 2025-12-13 07:33:23.803 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:33:24 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v699: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:24 np0005558317 nova_compute[241222]: 2025-12-13 07:33:24.802 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:33:24 np0005558317 nova_compute[241222]: 2025-12-13 07:33:24.803 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 02:33:24 np0005558317 nova_compute[241222]: 2025-12-13 07:33:24.803 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 02:33:24 np0005558317 nova_compute[241222]: 2025-12-13 07:33:24.813 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 02:33:24 np0005558317 nova_compute[241222]: 2025-12-13 07:33:24.813 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:33:24 np0005558317 nova_compute[241222]: 2025-12-13 07:33:24.813 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:33:24 np0005558317 nova_compute[241222]: 2025-12-13 07:33:24.813 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:33:26 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v700: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:33:28 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v701: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:30 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v702: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:32 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v703: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:32 np0005558317 podman[244468]: 2025-12-13 07:33:32.724527229 +0000 UTC m=+0.064825387 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 13 02:33:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:33:34 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v704: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:36 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v705: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:33:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:33:38
Dec 13 02:33:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:33:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 02:33:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'images', 'default.rgw.log', 'volumes', 'vms', '.mgr']
Dec 13 02:33:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:33:38 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v706: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:33:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:33:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:33:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:33:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:33:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:33:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:33:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:33:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:33:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:33:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:33:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:33:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:33:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:33:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:33:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:33:40 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v707: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:33:41.642 154121 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:33:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:33:41.643 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:33:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:33:41.643 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:33:42 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v708: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:33:43 np0005558317 podman[244492]: 2025-12-13 07:33:43.706063429 +0000 UTC m=+0.043423313 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 02:33:44 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v709: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:46 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v710: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:33:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:33:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:33:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:33:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:33:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:33:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:33:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:33:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:33:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:33:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:33:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:33:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:33:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 02:33:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:33:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:33:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:33:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:33:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:33:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:33:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:33:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:33:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:33:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:33:48 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v711: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:50 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v712: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:51 np0005558317 podman[244509]: 2025-12-13 07:33:51.695271177 +0000 UTC m=+0.037228307 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 02:33:52 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v713: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:33:54 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v714: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:56 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v715: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:33:58 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v716: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:00 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v717: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:02 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v718: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:34:03 np0005558317 podman[244525]: 2025-12-13 07:34:03.711606322 +0000 UTC m=+0.055514814 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec 13 02:34:04 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v719: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:06 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v720: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:34:08 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v721: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:34:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:34:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:34:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:34:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:34:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:34:10 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v722: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:12 np0005558317 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:34:12 np0005558317 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 5739 writes, 24K keys, 5739 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5739 writes, 979 syncs, 5.86 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s#012Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec 13 02:34:12 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v723: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:34:13 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 02:34:13 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1959695089' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 02:34:13 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 02:34:13 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1959695089' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 02:34:14 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v724: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:14 np0005558317 podman[244549]: 2025-12-13 07:34:14.705163562 +0000 UTC m=+0.041869932 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 13 02:34:15 np0005558317 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:34:15 np0005558317 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 7219 writes, 28K keys, 7219 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 7219 writes, 1518 syncs, 4.76 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s#012Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 2.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Dec 13 02:34:16 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v725: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:34:18 np0005558317 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:34:18 np0005558317 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5817 writes, 24K keys, 5817 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5817 writes, 955 syncs, 6.09 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s#012Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 1.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 1.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec 13 02:34:18 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v726: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:19 np0005558317 nova_compute[241222]: 2025-12-13 07:34:19.569 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:34:19 np0005558317 nova_compute[241222]: 2025-12-13 07:34:19.569 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 13 02:34:19 np0005558317 nova_compute[241222]: 2025-12-13 07:34:19.581 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 13 02:34:19 np0005558317 nova_compute[241222]: 2025-12-13 07:34:19.581 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:34:19 np0005558317 nova_compute[241222]: 2025-12-13 07:34:19.581 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 13 02:34:19 np0005558317 nova_compute[241222]: 2025-12-13 07:34:19.588 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:34:20 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v727: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:22 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v728: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:22 np0005558317 nova_compute[241222]: 2025-12-13 07:34:22.601 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:34:22 np0005558317 nova_compute[241222]: 2025-12-13 07:34:22.618 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:34:22 np0005558317 nova_compute[241222]: 2025-12-13 07:34:22.618 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:34:22 np0005558317 nova_compute[241222]: 2025-12-13 07:34:22.619 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:34:22 np0005558317 nova_compute[241222]: 2025-12-13 07:34:22.619 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 02:34:22 np0005558317 nova_compute[241222]: 2025-12-13 07:34:22.619 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 02:34:22 np0005558317 podman[244568]: 2025-12-13 07:34:22.690911516 +0000 UTC m=+0.035158867 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 02:34:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:34:23 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 02:34:23 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3531130214' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 02:34:23 np0005558317 nova_compute[241222]: 2025-12-13 07:34:23.026 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 02:34:23 np0005558317 nova_compute[241222]: 2025-12-13 07:34:23.221 241226 WARNING nova.virt.libvirt.driver [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 02:34:23 np0005558317 nova_compute[241222]: 2025-12-13 07:34:23.222 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5172MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": 
"0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 02:34:23 np0005558317 nova_compute[241222]: 2025-12-13 07:34:23.222 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:34:23 np0005558317 nova_compute[241222]: 2025-12-13 07:34:23.223 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:34:23 np0005558317 ceph-mgr[75200]: [devicehealth INFO root] Check health
Dec 13 02:34:23 np0005558317 nova_compute[241222]: 2025-12-13 07:34:23.402 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 02:34:23 np0005558317 nova_compute[241222]: 2025-12-13 07:34:23.402 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 02:34:23 np0005558317 nova_compute[241222]: 2025-12-13 07:34:23.456 241226 DEBUG nova.scheduler.client.report [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Refreshing inventories for resource provider 1d614cf3-e40f-4742-a628-7a61041be9be _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec 13 02:34:23 np0005558317 nova_compute[241222]: 2025-12-13 07:34:23.511 241226 DEBUG nova.scheduler.client.report [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Updating ProviderTree inventory for provider 1d614cf3-e40f-4742-a628-7a61041be9be from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec 13 02:34:23 np0005558317 nova_compute[241222]: 2025-12-13 07:34:23.511 241226 DEBUG nova.compute.provider_tree [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Updating inventory in ProviderTree for provider 1d614cf3-e40f-4742-a628-7a61041be9be with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 13 02:34:23 np0005558317 nova_compute[241222]: 2025-12-13 07:34:23.522 241226 DEBUG nova.scheduler.client.report [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Refreshing aggregate associations for resource provider 1d614cf3-e40f-4742-a628-7a61041be9be, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec 13 02:34:23 np0005558317 nova_compute[241222]: 2025-12-13 07:34:23.539 241226 DEBUG nova.scheduler.client.report [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Refreshing trait associations for resource provider 1d614cf3-e40f-4742-a628-7a61041be9be, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,HW_CPU_X86_F16C,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX512VPCLMULQDQ,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX512VAES,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI2,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 13 02:34:23 np0005558317 nova_compute[241222]: 2025-12-13 07:34:23.548 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 02:34:23 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 02:34:23 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3445481919' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 02:34:23 np0005558317 nova_compute[241222]: 2025-12-13 07:34:23.956 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 02:34:23 np0005558317 nova_compute[241222]: 2025-12-13 07:34:23.960 241226 DEBUG nova.compute.provider_tree [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Inventory has not changed in ProviderTree for provider: 1d614cf3-e40f-4742-a628-7a61041be9be update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 02:34:23 np0005558317 nova_compute[241222]: 2025-12-13 07:34:23.971 241226 DEBUG nova.scheduler.client.report [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Inventory has not changed for provider 1d614cf3-e40f-4742-a628-7a61041be9be based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 02:34:23 np0005558317 nova_compute[241222]: 2025-12-13 07:34:23.972 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 02:34:23 np0005558317 nova_compute[241222]: 2025-12-13 07:34:23.973 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:34:24 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:34:24 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:34:24 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:34:24 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:34:24 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:34:24 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:34:24 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:34:24 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:34:24 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:34:24 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:34:24 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:34:24 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:34:24 np0005558317 podman[244766]: 2025-12-13 07:34:24.424850062 +0000 UTC m=+0.027673985 container create 2d153722389bf0223b48a191e3e283d7682d783b8023b977cb0375116634e0bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 02:34:24 np0005558317 systemd[1]: Started libpod-conmon-2d153722389bf0223b48a191e3e283d7682d783b8023b977cb0375116634e0bf.scope.
Dec 13 02:34:24 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:34:24 np0005558317 podman[244766]: 2025-12-13 07:34:24.47721322 +0000 UTC m=+0.080037163 container init 2d153722389bf0223b48a191e3e283d7682d783b8023b977cb0375116634e0bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_pasteur, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:34:24 np0005558317 podman[244766]: 2025-12-13 07:34:24.481971656 +0000 UTC m=+0.084795579 container start 2d153722389bf0223b48a191e3e283d7682d783b8023b977cb0375116634e0bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_pasteur, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 02:34:24 np0005558317 podman[244766]: 2025-12-13 07:34:24.484296617 +0000 UTC m=+0.087120540 container attach 2d153722389bf0223b48a191e3e283d7682d783b8023b977cb0375116634e0bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:34:24 np0005558317 awesome_pasteur[244779]: 167 167
Dec 13 02:34:24 np0005558317 systemd[1]: libpod-2d153722389bf0223b48a191e3e283d7682d783b8023b977cb0375116634e0bf.scope: Deactivated successfully.
Dec 13 02:34:24 np0005558317 podman[244766]: 2025-12-13 07:34:24.48537304 +0000 UTC m=+0.088196963 container died 2d153722389bf0223b48a191e3e283d7682d783b8023b977cb0375116634e0bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:34:24 np0005558317 systemd[1]: var-lib-containers-storage-overlay-8fa5e36ba187816d186afdf5d596b78c1aa5ee69105b38b083eb0d79f5700567-merged.mount: Deactivated successfully.
Dec 13 02:34:24 np0005558317 podman[244766]: 2025-12-13 07:34:24.504295713 +0000 UTC m=+0.107119636 container remove 2d153722389bf0223b48a191e3e283d7682d783b8023b977cb0375116634e0bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_pasteur, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:34:24 np0005558317 podman[244766]: 2025-12-13 07:34:24.413658574 +0000 UTC m=+0.016482496 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:34:24 np0005558317 systemd[1]: libpod-conmon-2d153722389bf0223b48a191e3e283d7682d783b8023b977cb0375116634e0bf.scope: Deactivated successfully.
Dec 13 02:34:24 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v729: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:24 np0005558317 podman[244802]: 2025-12-13 07:34:24.623415128 +0000 UTC m=+0.027842272 container create bcc18f1e90f4769ce1562424eaeec1f120f42de5ae950ce32fd553ba70e5abd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:34:24 np0005558317 systemd[1]: Started libpod-conmon-bcc18f1e90f4769ce1562424eaeec1f120f42de5ae950ce32fd553ba70e5abd0.scope.
Dec 13 02:34:24 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:34:24 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dd04ace1e8e5a1b68c87e9408816db5775d416061c985c764cf6fbcebb87d69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:34:24 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:34:24 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:34:24 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:34:24 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dd04ace1e8e5a1b68c87e9408816db5775d416061c985c764cf6fbcebb87d69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:34:24 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dd04ace1e8e5a1b68c87e9408816db5775d416061c985c764cf6fbcebb87d69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:34:24 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dd04ace1e8e5a1b68c87e9408816db5775d416061c985c764cf6fbcebb87d69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:34:24 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dd04ace1e8e5a1b68c87e9408816db5775d416061c985c764cf6fbcebb87d69/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:34:24 np0005558317 podman[244802]: 2025-12-13 07:34:24.685704657 +0000 UTC m=+0.090131821 container init bcc18f1e90f4769ce1562424eaeec1f120f42de5ae950ce32fd553ba70e5abd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_jang, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:34:24 np0005558317 podman[244802]: 2025-12-13 07:34:24.691890976 +0000 UTC m=+0.096318121 container start bcc18f1e90f4769ce1562424eaeec1f120f42de5ae950ce32fd553ba70e5abd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_jang, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:34:24 np0005558317 podman[244802]: 2025-12-13 07:34:24.693213123 +0000 UTC m=+0.097640267 container attach bcc18f1e90f4769ce1562424eaeec1f120f42de5ae950ce32fd553ba70e5abd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_jang, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 02:34:24 np0005558317 podman[244802]: 2025-12-13 07:34:24.611694093 +0000 UTC m=+0.016121258 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:34:24 np0005558317 nova_compute[241222]: 2025-12-13 07:34:24.934 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:34:24 np0005558317 nova_compute[241222]: 2025-12-13 07:34:24.936 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:34:24 np0005558317 nova_compute[241222]: 2025-12-13 07:34:24.937 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:34:24 np0005558317 nova_compute[241222]: 2025-12-13 07:34:24.937 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:34:24 np0005558317 nova_compute[241222]: 2025-12-13 07:34:24.937 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:34:24 np0005558317 nova_compute[241222]: 2025-12-13 07:34:24.937 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 02:34:25 np0005558317 trusting_jang[244817]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:34:25 np0005558317 trusting_jang[244817]: --> All data devices are unavailable
Dec 13 02:34:25 np0005558317 systemd[1]: libpod-bcc18f1e90f4769ce1562424eaeec1f120f42de5ae950ce32fd553ba70e5abd0.scope: Deactivated successfully.
Dec 13 02:34:25 np0005558317 conmon[244817]: conmon bcc18f1e90f4769ce156 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bcc18f1e90f4769ce1562424eaeec1f120f42de5ae950ce32fd553ba70e5abd0.scope/container/memory.events
Dec 13 02:34:25 np0005558317 podman[244837]: 2025-12-13 07:34:25.082190616 +0000 UTC m=+0.017422635 container died bcc18f1e90f4769ce1562424eaeec1f120f42de5ae950ce32fd553ba70e5abd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_jang, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 02:34:25 np0005558317 systemd[1]: var-lib-containers-storage-overlay-6dd04ace1e8e5a1b68c87e9408816db5775d416061c985c764cf6fbcebb87d69-merged.mount: Deactivated successfully.
Dec 13 02:34:25 np0005558317 podman[244837]: 2025-12-13 07:34:25.100997031 +0000 UTC m=+0.036229049 container remove bcc18f1e90f4769ce1562424eaeec1f120f42de5ae950ce32fd553ba70e5abd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_jang, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:34:25 np0005558317 systemd[1]: libpod-conmon-bcc18f1e90f4769ce1562424eaeec1f120f42de5ae950ce32fd553ba70e5abd0.scope: Deactivated successfully.
Dec 13 02:34:25 np0005558317 podman[244909]: 2025-12-13 07:34:25.441364953 +0000 UTC m=+0.028291056 container create 4f466d6ebc043651a1faabe893dbe3dc399d601d0815d5187779ac229a695844 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 02:34:25 np0005558317 systemd[1]: Started libpod-conmon-4f466d6ebc043651a1faabe893dbe3dc399d601d0815d5187779ac229a695844.scope.
Dec 13 02:34:25 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:34:25 np0005558317 podman[244909]: 2025-12-13 07:34:25.483374327 +0000 UTC m=+0.070300439 container init 4f466d6ebc043651a1faabe893dbe3dc399d601d0815d5187779ac229a695844 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:34:25 np0005558317 podman[244909]: 2025-12-13 07:34:25.487537284 +0000 UTC m=+0.074463396 container start 4f466d6ebc043651a1faabe893dbe3dc399d601d0815d5187779ac229a695844 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:34:25 np0005558317 podman[244909]: 2025-12-13 07:34:25.488923059 +0000 UTC m=+0.075849181 container attach 4f466d6ebc043651a1faabe893dbe3dc399d601d0815d5187779ac229a695844 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_sanderson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 02:34:25 np0005558317 wonderful_sanderson[244922]: 167 167
Dec 13 02:34:25 np0005558317 systemd[1]: libpod-4f466d6ebc043651a1faabe893dbe3dc399d601d0815d5187779ac229a695844.scope: Deactivated successfully.
Dec 13 02:34:25 np0005558317 podman[244909]: 2025-12-13 07:34:25.491810547 +0000 UTC m=+0.078736650 container died 4f466d6ebc043651a1faabe893dbe3dc399d601d0815d5187779ac229a695844 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_sanderson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Dec 13 02:34:25 np0005558317 systemd[1]: var-lib-containers-storage-overlay-dc6285dfad4b1cf53382164df9e54428a095a35eb6646eb51f0af2aba03cd9f0-merged.mount: Deactivated successfully.
Dec 13 02:34:25 np0005558317 podman[244909]: 2025-12-13 07:34:25.508476126 +0000 UTC m=+0.095402229 container remove 4f466d6ebc043651a1faabe893dbe3dc399d601d0815d5187779ac229a695844 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_sanderson, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 02:34:25 np0005558317 podman[244909]: 2025-12-13 07:34:25.429223308 +0000 UTC m=+0.016149420 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:34:25 np0005558317 systemd[1]: libpod-conmon-4f466d6ebc043651a1faabe893dbe3dc399d601d0815d5187779ac229a695844.scope: Deactivated successfully.
Dec 13 02:34:25 np0005558317 nova_compute[241222]: 2025-12-13 07:34:25.569 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 02:34:25 np0005558317 nova_compute[241222]: 2025-12-13 07:34:25.570 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 02:34:25 np0005558317 nova_compute[241222]: 2025-12-13 07:34:25.570 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 02:34:25 np0005558317 nova_compute[241222]: 2025-12-13 07:34:25.634 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 02:34:25 np0005558317 nova_compute[241222]: 2025-12-13 07:34:25.634 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 02:34:25 np0005558317 podman[244944]: 2025-12-13 07:34:25.647516541 +0000 UTC m=+0.031436098 container create 8f46244380dad1bb10d9d80e92dc62c9b6369a151fef787843d7ad1afc5e1a16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_proskuriakova, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:34:25 np0005558317 systemd[1]: Started libpod-conmon-8f46244380dad1bb10d9d80e92dc62c9b6369a151fef787843d7ad1afc5e1a16.scope.
Dec 13 02:34:25 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:34:25 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/635c276503117553931aa2438669f780896a1c5e233d7c9b9e051ac78f536a61/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:34:25 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/635c276503117553931aa2438669f780896a1c5e233d7c9b9e051ac78f536a61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:34:25 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/635c276503117553931aa2438669f780896a1c5e233d7c9b9e051ac78f536a61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:34:25 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/635c276503117553931aa2438669f780896a1c5e233d7c9b9e051ac78f536a61/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:34:25 np0005558317 podman[244944]: 2025-12-13 07:34:25.702280021 +0000 UTC m=+0.086199589 container init 8f46244380dad1bb10d9d80e92dc62c9b6369a151fef787843d7ad1afc5e1a16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_proskuriakova, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:34:25 np0005558317 podman[244944]: 2025-12-13 07:34:25.707052343 +0000 UTC m=+0.090971890 container start 8f46244380dad1bb10d9d80e92dc62c9b6369a151fef787843d7ad1afc5e1a16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_proskuriakova, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:34:25 np0005558317 podman[244944]: 2025-12-13 07:34:25.708300961 +0000 UTC m=+0.092220508 container attach 8f46244380dad1bb10d9d80e92dc62c9b6369a151fef787843d7ad1afc5e1a16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Dec 13 02:34:25 np0005558317 podman[244944]: 2025-12-13 07:34:25.634901015 +0000 UTC m=+0.018820582 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]: {
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:    "0": [
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:        {
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "devices": [
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "/dev/loop3"
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            ],
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "lv_name": "ceph_lv0",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "lv_size": "21470642176",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "name": "ceph_lv0",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "tags": {
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.cluster_name": "ceph",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.crush_device_class": "",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.encrypted": "0",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.objectstore": "bluestore",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.osd_id": "0",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.type": "block",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.vdo": "0",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.with_tpm": "0"
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            },
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "type": "block",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "vg_name": "ceph_vg0"
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:        }
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:    ],
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:    "1": [
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:        {
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "devices": [
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "/dev/loop4"
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            ],
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "lv_name": "ceph_lv1",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "lv_size": "21470642176",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "name": "ceph_lv1",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "tags": {
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.cluster_name": "ceph",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.crush_device_class": "",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.encrypted": "0",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.objectstore": "bluestore",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.osd_id": "1",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.type": "block",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.vdo": "0",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.with_tpm": "0"
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            },
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "type": "block",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "vg_name": "ceph_vg1"
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:        }
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:    ],
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:    "2": [
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:        {
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "devices": [
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "/dev/loop5"
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            ],
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "lv_name": "ceph_lv2",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "lv_size": "21470642176",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "name": "ceph_lv2",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "tags": {
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.cluster_name": "ceph",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.crush_device_class": "",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.encrypted": "0",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.objectstore": "bluestore",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.osd_id": "2",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.type": "block",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.vdo": "0",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:                "ceph.with_tpm": "0"
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            },
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "type": "block",
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:            "vg_name": "ceph_vg2"
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:        }
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]:    ]
Dec 13 02:34:25 np0005558317 clever_proskuriakova[244958]: }
Dec 13 02:34:25 np0005558317 systemd[1]: libpod-8f46244380dad1bb10d9d80e92dc62c9b6369a151fef787843d7ad1afc5e1a16.scope: Deactivated successfully.
Dec 13 02:34:25 np0005558317 podman[244944]: 2025-12-13 07:34:25.935204379 +0000 UTC m=+0.319123936 container died 8f46244380dad1bb10d9d80e92dc62c9b6369a151fef787843d7ad1afc5e1a16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_proskuriakova, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:34:25 np0005558317 systemd[1]: var-lib-containers-storage-overlay-635c276503117553931aa2438669f780896a1c5e233d7c9b9e051ac78f536a61-merged.mount: Deactivated successfully.
Dec 13 02:34:25 np0005558317 podman[244944]: 2025-12-13 07:34:25.956979154 +0000 UTC m=+0.340898701 container remove 8f46244380dad1bb10d9d80e92dc62c9b6369a151fef787843d7ad1afc5e1a16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_proskuriakova, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 02:34:25 np0005558317 systemd[1]: libpod-conmon-8f46244380dad1bb10d9d80e92dc62c9b6369a151fef787843d7ad1afc5e1a16.scope: Deactivated successfully.
Dec 13 02:34:26 np0005558317 podman[245036]: 2025-12-13 07:34:26.291265554 +0000 UTC m=+0.027519185 container create 6315847e8a2701a2918d3fcbbd01a4414fdf5f60928ce47ed435f322aaae6677 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:34:26 np0005558317 systemd[1]: Started libpod-conmon-6315847e8a2701a2918d3fcbbd01a4414fdf5f60928ce47ed435f322aaae6677.scope.
Dec 13 02:34:26 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:34:26 np0005558317 podman[245036]: 2025-12-13 07:34:26.343390013 +0000 UTC m=+0.079643665 container init 6315847e8a2701a2918d3fcbbd01a4414fdf5f60928ce47ed435f322aaae6677 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:34:26 np0005558317 podman[245036]: 2025-12-13 07:34:26.348066665 +0000 UTC m=+0.084320297 container start 6315847e8a2701a2918d3fcbbd01a4414fdf5f60928ce47ed435f322aaae6677 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:34:26 np0005558317 podman[245036]: 2025-12-13 07:34:26.349340761 +0000 UTC m=+0.085594392 container attach 6315847e8a2701a2918d3fcbbd01a4414fdf5f60928ce47ed435f322aaae6677 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 02:34:26 np0005558317 dreamy_yonath[245049]: 167 167
Dec 13 02:34:26 np0005558317 systemd[1]: libpod-6315847e8a2701a2918d3fcbbd01a4414fdf5f60928ce47ed435f322aaae6677.scope: Deactivated successfully.
Dec 13 02:34:26 np0005558317 podman[245036]: 2025-12-13 07:34:26.35157504 +0000 UTC m=+0.087828671 container died 6315847e8a2701a2918d3fcbbd01a4414fdf5f60928ce47ed435f322aaae6677 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_yonath, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 02:34:26 np0005558317 podman[245036]: 2025-12-13 07:34:26.36924608 +0000 UTC m=+0.105499712 container remove 6315847e8a2701a2918d3fcbbd01a4414fdf5f60928ce47ed435f322aaae6677 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_yonath, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:34:26 np0005558317 podman[245036]: 2025-12-13 07:34:26.279893235 +0000 UTC m=+0.016146876 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:34:26 np0005558317 systemd[1]: libpod-conmon-6315847e8a2701a2918d3fcbbd01a4414fdf5f60928ce47ed435f322aaae6677.scope: Deactivated successfully.
Dec 13 02:34:26 np0005558317 systemd[1]: var-lib-containers-storage-overlay-6bba7c7961500252a6b8d8253b1442d9b511de710b12eb09faa2d31c55fceff7-merged.mount: Deactivated successfully.
Dec 13 02:34:26 np0005558317 podman[245071]: 2025-12-13 07:34:26.49083105 +0000 UTC m=+0.028663615 container create 0682d19eef7e8565b16413ffc4ae0170de971148d58d8afb637da43336a84d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mahavira, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 02:34:26 np0005558317 systemd[1]: Started libpod-conmon-0682d19eef7e8565b16413ffc4ae0170de971148d58d8afb637da43336a84d32.scope.
Dec 13 02:34:26 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:34:26 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6802ff44602123d0ceb48bff1a0dfc1d4add76caf0f2fd318bbdce5604877e76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:34:26 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6802ff44602123d0ceb48bff1a0dfc1d4add76caf0f2fd318bbdce5604877e76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:34:26 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6802ff44602123d0ceb48bff1a0dfc1d4add76caf0f2fd318bbdce5604877e76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:34:26 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6802ff44602123d0ceb48bff1a0dfc1d4add76caf0f2fd318bbdce5604877e76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:34:26 np0005558317 podman[245071]: 2025-12-13 07:34:26.546561209 +0000 UTC m=+0.084393783 container init 0682d19eef7e8565b16413ffc4ae0170de971148d58d8afb637da43336a84d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mahavira, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:34:26 np0005558317 podman[245071]: 2025-12-13 07:34:26.553952503 +0000 UTC m=+0.091785057 container start 0682d19eef7e8565b16413ffc4ae0170de971148d58d8afb637da43336a84d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mahavira, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:34:26 np0005558317 podman[245071]: 2025-12-13 07:34:26.555397139 +0000 UTC m=+0.093229694 container attach 0682d19eef7e8565b16413ffc4ae0170de971148d58d8afb637da43336a84d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 02:34:26 np0005558317 nova_compute[241222]: 2025-12-13 07:34:26.568 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:34:26 np0005558317 podman[245071]: 2025-12-13 07:34:26.479524334 +0000 UTC m=+0.017356910 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:34:26 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v730: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:27 np0005558317 lvm[245159]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:34:27 np0005558317 lvm[245159]: VG ceph_vg0 finished
Dec 13 02:34:27 np0005558317 lvm[245162]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:34:27 np0005558317 lvm[245162]: VG ceph_vg1 finished
Dec 13 02:34:27 np0005558317 lvm[245165]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:34:27 np0005558317 lvm[245165]: VG ceph_vg2 finished
Dec 13 02:34:27 np0005558317 naughty_mahavira[245084]: {}
Dec 13 02:34:27 np0005558317 systemd[1]: libpod-0682d19eef7e8565b16413ffc4ae0170de971148d58d8afb637da43336a84d32.scope: Deactivated successfully.
Dec 13 02:34:27 np0005558317 podman[245071]: 2025-12-13 07:34:27.186629952 +0000 UTC m=+0.724462507 container died 0682d19eef7e8565b16413ffc4ae0170de971148d58d8afb637da43336a84d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:34:27 np0005558317 systemd[1]: var-lib-containers-storage-overlay-6802ff44602123d0ceb48bff1a0dfc1d4add76caf0f2fd318bbdce5604877e76-merged.mount: Deactivated successfully.
Dec 13 02:34:27 np0005558317 podman[245071]: 2025-12-13 07:34:27.212840136 +0000 UTC m=+0.750672681 container remove 0682d19eef7e8565b16413ffc4ae0170de971148d58d8afb637da43336a84d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mahavira, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 02:34:27 np0005558317 systemd[1]: libpod-conmon-0682d19eef7e8565b16413ffc4ae0170de971148d58d8afb637da43336a84d32.scope: Deactivated successfully.
Dec 13 02:34:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:34:27 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:34:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:34:27 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:34:27 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:34:27 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:34:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:34:28 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v731: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:30 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v732: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:32 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v733: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:34:34 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v734: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:34 np0005558317 podman[245203]: 2025-12-13 07:34:34.723102758 +0000 UTC m=+0.063460020 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 02:34:36 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v735: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:34:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:34:38
Dec 13 02:34:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:34:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 02:34:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'default.rgw.log', 'images', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'vms', '.rgw.root']
Dec 13 02:34:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:34:38 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v736: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:34:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:34:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:34:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:34:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:34:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:34:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:34:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:34:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:34:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:34:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:34:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:34:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:34:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:34:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:34:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:34:40 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v737: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:34:41.644 154121 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:34:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:34:41.644 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:34:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:34:41.644 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:34:42 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v738: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:34:44 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v739: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:45 np0005558317 podman[245226]: 2025-12-13 07:34:45.701357209 +0000 UTC m=+0.040306013 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 02:34:46 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v740: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:34:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:34:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:34:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:34:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:34:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:34:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:34:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:34:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:34:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:34:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:34:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:34:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:34:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 02:34:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:34:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:34:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:34:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:34:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:34:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:34:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:34:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:34:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:34:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:34:48 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v741: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:50 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v742: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:52 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v743: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:34:53 np0005558317 podman[245243]: 2025-12-13 07:34:53.692936319 +0000 UTC m=+0.035983571 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 02:34:54 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v744: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:56 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v745: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:34:58 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v746: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:00 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v747: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:02 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v748: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:35:04 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v749: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:05 np0005558317 podman[245260]: 2025-12-13 07:35:05.712067451 +0000 UTC m=+0.054069046 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec 13 02:35:06 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v750: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:35:08 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v751: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Dec 13 02:35:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:35:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:35:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:35:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:35:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:35:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:35:10 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v752: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 02:35:12 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v753: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 02:35:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:35:13 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 02:35:13 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4143608827' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 02:35:13 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 02:35:13 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4143608827' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 02:35:14 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v754: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 02:35:16 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v755: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 02:35:16 np0005558317 podman[245283]: 2025-12-13 07:35:16.700131469 +0000 UTC m=+0.038223442 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 02:35:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:35:18 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v756: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 02:35:20 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v757: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.776039) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611320776096, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1401, "num_deletes": 251, "total_data_size": 2215775, "memory_usage": 2261264, "flush_reason": "Manual Compaction"}
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611320783925, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2183762, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14922, "largest_seqno": 16322, "table_properties": {"data_size": 2177253, "index_size": 3708, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13496, "raw_average_key_size": 19, "raw_value_size": 2164135, "raw_average_value_size": 3150, "num_data_blocks": 170, "num_entries": 687, "num_filter_entries": 687, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611173, "oldest_key_time": 1765611173, "file_creation_time": 1765611320, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 7905 microseconds, and 6486 cpu microseconds.
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.783954) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2183762 bytes OK
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.783968) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.784312) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.784324) EVENT_LOG_v1 {"time_micros": 1765611320784321, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.784337) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2209581, prev total WAL file size 2209581, number of live WAL files 2.
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.784846) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2132KB)], [35(7320KB)]
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611320784884, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9680075, "oldest_snapshot_seqno": -1}
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4008 keys, 7869657 bytes, temperature: kUnknown
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611320800875, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7869657, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7840711, "index_size": 17828, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 97911, "raw_average_key_size": 24, "raw_value_size": 7766007, "raw_average_value_size": 1937, "num_data_blocks": 756, "num_entries": 4008, "num_filter_entries": 4008, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610001, "oldest_key_time": 0, "file_creation_time": 1765611320, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.801013) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7869657 bytes
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.801315) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 604.1 rd, 491.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 7.1 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(8.0) write-amplify(3.6) OK, records in: 4522, records dropped: 514 output_compression: NoCompression
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.801331) EVENT_LOG_v1 {"time_micros": 1765611320801322, "job": 16, "event": "compaction_finished", "compaction_time_micros": 16024, "compaction_time_cpu_micros": 13279, "output_level": 6, "num_output_files": 1, "total_output_size": 7869657, "num_input_records": 4522, "num_output_records": 4008, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611320801680, "job": 16, "event": "table_file_deletion", "file_number": 37}
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611320802762, "job": 16, "event": "table_file_deletion", "file_number": 35}
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.784778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.802783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.802785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.802786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.802788) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:35:20 np0005558317 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.802789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:35:22 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v758: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:35:23 np0005558317 nova_compute[241222]: 2025-12-13 07:35:23.568 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:35:23 np0005558317 nova_compute[241222]: 2025-12-13 07:35:23.568 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:35:23 np0005558317 nova_compute[241222]: 2025-12-13 07:35:23.588 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:35:23 np0005558317 nova_compute[241222]: 2025-12-13 07:35:23.589 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:35:23 np0005558317 nova_compute[241222]: 2025-12-13 07:35:23.589 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:35:23 np0005558317 nova_compute[241222]: 2025-12-13 07:35:23.589 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 02:35:23 np0005558317 nova_compute[241222]: 2025-12-13 07:35:23.589 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 02:35:23 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 02:35:23 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4186093368' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 02:35:23 np0005558317 nova_compute[241222]: 2025-12-13 07:35:23.993 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 02:35:24 np0005558317 nova_compute[241222]: 2025-12-13 07:35:24.202 241226 WARNING nova.virt.libvirt.driver [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 02:35:24 np0005558317 nova_compute[241222]: 2025-12-13 07:35:24.203 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5155MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": 
"0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 02:35:24 np0005558317 nova_compute[241222]: 2025-12-13 07:35:24.204 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:35:24 np0005558317 nova_compute[241222]: 2025-12-13 07:35:24.204 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:35:24 np0005558317 nova_compute[241222]: 2025-12-13 07:35:24.247 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 02:35:24 np0005558317 nova_compute[241222]: 2025-12-13 07:35:24.247 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 02:35:24 np0005558317 nova_compute[241222]: 2025-12-13 07:35:24.258 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 02:35:24 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v759: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:24 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 02:35:24 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3132218194' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 02:35:24 np0005558317 nova_compute[241222]: 2025-12-13 07:35:24.675 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 02:35:24 np0005558317 nova_compute[241222]: 2025-12-13 07:35:24.680 241226 DEBUG nova.compute.provider_tree [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Inventory has not changed in ProviderTree for provider: 1d614cf3-e40f-4742-a628-7a61041be9be update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 02:35:24 np0005558317 nova_compute[241222]: 2025-12-13 07:35:24.690 241226 DEBUG nova.scheduler.client.report [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Inventory has not changed for provider 1d614cf3-e40f-4742-a628-7a61041be9be based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 02:35:24 np0005558317 nova_compute[241222]: 2025-12-13 07:35:24.691 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 02:35:24 np0005558317 nova_compute[241222]: 2025-12-13 07:35:24.691 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:35:24 np0005558317 podman[245342]: 2025-12-13 07:35:24.699052416 +0000 UTC m=+0.040042994 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 13 02:35:25 np0005558317 nova_compute[241222]: 2025-12-13 07:35:25.688 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:35:25 np0005558317 nova_compute[241222]: 2025-12-13 07:35:25.688 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 02:35:25 np0005558317 nova_compute[241222]: 2025-12-13 07:35:25.688 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 02:35:25 np0005558317 nova_compute[241222]: 2025-12-13 07:35:25.688 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 02:35:25 np0005558317 nova_compute[241222]: 2025-12-13 07:35:25.698 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 02:35:25 np0005558317 nova_compute[241222]: 2025-12-13 07:35:25.698 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 02:35:25 np0005558317 nova_compute[241222]: 2025-12-13 07:35:25.698 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 02:35:25 np0005558317 nova_compute[241222]: 2025-12-13 07:35:25.698 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 02:35:25 np0005558317 nova_compute[241222]: 2025-12-13 07:35:25.698 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 02:35:25 np0005558317 nova_compute[241222]: 2025-12-13 07:35:25.699 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 02:35:26 np0005558317 nova_compute[241222]: 2025-12-13 07:35:26.568 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 02:35:26 np0005558317 nova_compute[241222]: 2025-12-13 07:35:26.579 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 02:35:26 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v760: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:35:27 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:35:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:35:27 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:35:27 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:35:27 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:35:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:35:28 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 02:35:28 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 02:35:28 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:35:28 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:35:28 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:35:28 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:35:28 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:35:28 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:35:28 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:35:28 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:35:28 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:35:28 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:35:28 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:35:28 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:35:28 np0005558317 podman[245568]: 2025-12-13 07:35:28.431983092 +0000 UTC m=+0.029909169 container create 1032a54cdf35e581f5e2277a99372a8ea2320c9444733f167d7717ccc9f97b76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hugle, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:35:28 np0005558317 systemd[1]: Started libpod-conmon-1032a54cdf35e581f5e2277a99372a8ea2320c9444733f167d7717ccc9f97b76.scope.
Dec 13 02:35:28 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:35:28 np0005558317 podman[245568]: 2025-12-13 07:35:28.480922622 +0000 UTC m=+0.078848699 container init 1032a54cdf35e581f5e2277a99372a8ea2320c9444733f167d7717ccc9f97b76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hugle, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:35:28 np0005558317 podman[245568]: 2025-12-13 07:35:28.486127659 +0000 UTC m=+0.084053736 container start 1032a54cdf35e581f5e2277a99372a8ea2320c9444733f167d7717ccc9f97b76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hugle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 02:35:28 np0005558317 podman[245568]: 2025-12-13 07:35:28.487220144 +0000 UTC m=+0.085146221 container attach 1032a54cdf35e581f5e2277a99372a8ea2320c9444733f167d7717ccc9f97b76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:35:28 np0005558317 flamboyant_hugle[245581]: 167 167
Dec 13 02:35:28 np0005558317 systemd[1]: libpod-1032a54cdf35e581f5e2277a99372a8ea2320c9444733f167d7717ccc9f97b76.scope: Deactivated successfully.
Dec 13 02:35:28 np0005558317 podman[245568]: 2025-12-13 07:35:28.490199195 +0000 UTC m=+0.088125272 container died 1032a54cdf35e581f5e2277a99372a8ea2320c9444733f167d7717ccc9f97b76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 02:35:28 np0005558317 systemd[1]: var-lib-containers-storage-overlay-84c913734e1da22587a95389b15c181803390908c5e9e07a5ce2644c3fed9733-merged.mount: Deactivated successfully.
Dec 13 02:35:28 np0005558317 podman[245568]: 2025-12-13 07:35:28.509362628 +0000 UTC m=+0.107288705 container remove 1032a54cdf35e581f5e2277a99372a8ea2320c9444733f167d7717ccc9f97b76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hugle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:35:28 np0005558317 podman[245568]: 2025-12-13 07:35:28.419731194 +0000 UTC m=+0.017657282 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:35:28 np0005558317 systemd[1]: libpod-conmon-1032a54cdf35e581f5e2277a99372a8ea2320c9444733f167d7717ccc9f97b76.scope: Deactivated successfully.
Dec 13 02:35:28 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v761: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:28 np0005558317 podman[245602]: 2025-12-13 07:35:28.630466548 +0000 UTC m=+0.030003276 container create 190e6acdb6888d084381e05c2f6c23ef1681d3d51b2dba40cd8c09c4ca872879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:35:28 np0005558317 systemd[1]: Started libpod-conmon-190e6acdb6888d084381e05c2f6c23ef1681d3d51b2dba40cd8c09c4ca872879.scope.
Dec 13 02:35:28 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 02:35:28 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:35:28 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:35:28 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:35:28 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:35:28 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdb6fcc18b0769111c5fe90419aabad2a035e2544e1badaf0daa4176f0f80a56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:28 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdb6fcc18b0769111c5fe90419aabad2a035e2544e1badaf0daa4176f0f80a56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:28 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdb6fcc18b0769111c5fe90419aabad2a035e2544e1badaf0daa4176f0f80a56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:28 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdb6fcc18b0769111c5fe90419aabad2a035e2544e1badaf0daa4176f0f80a56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:28 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdb6fcc18b0769111c5fe90419aabad2a035e2544e1badaf0daa4176f0f80a56/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:28 np0005558317 podman[245602]: 2025-12-13 07:35:28.686900771 +0000 UTC m=+0.086437518 container init 190e6acdb6888d084381e05c2f6c23ef1681d3d51b2dba40cd8c09c4ca872879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ellis, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 02:35:28 np0005558317 podman[245602]: 2025-12-13 07:35:28.69331436 +0000 UTC m=+0.092851078 container start 190e6acdb6888d084381e05c2f6c23ef1681d3d51b2dba40cd8c09c4ca872879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ellis, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Dec 13 02:35:28 np0005558317 podman[245602]: 2025-12-13 07:35:28.694546437 +0000 UTC m=+0.094083164 container attach 190e6acdb6888d084381e05c2f6c23ef1681d3d51b2dba40cd8c09c4ca872879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 02:35:28 np0005558317 podman[245602]: 2025-12-13 07:35:28.618901773 +0000 UTC m=+0.018438520 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:35:29 np0005558317 keen_ellis[245615]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:35:29 np0005558317 keen_ellis[245615]: --> All data devices are unavailable
Dec 13 02:35:29 np0005558317 systemd[1]: libpod-190e6acdb6888d084381e05c2f6c23ef1681d3d51b2dba40cd8c09c4ca872879.scope: Deactivated successfully.
Dec 13 02:35:29 np0005558317 podman[245602]: 2025-12-13 07:35:29.069893018 +0000 UTC m=+0.469429745 container died 190e6acdb6888d084381e05c2f6c23ef1681d3d51b2dba40cd8c09c4ca872879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ellis, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:35:29 np0005558317 systemd[1]: var-lib-containers-storage-overlay-fdb6fcc18b0769111c5fe90419aabad2a035e2544e1badaf0daa4176f0f80a56-merged.mount: Deactivated successfully.
Dec 13 02:35:29 np0005558317 podman[245602]: 2025-12-13 07:35:29.090604562 +0000 UTC m=+0.490141289 container remove 190e6acdb6888d084381e05c2f6c23ef1681d3d51b2dba40cd8c09c4ca872879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ellis, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 02:35:29 np0005558317 systemd[1]: libpod-conmon-190e6acdb6888d084381e05c2f6c23ef1681d3d51b2dba40cd8c09c4ca872879.scope: Deactivated successfully.
Dec 13 02:35:29 np0005558317 podman[245704]: 2025-12-13 07:35:29.430995907 +0000 UTC m=+0.029388279 container create 79764660549df75de455581f909107f757ea306672c0645860bc4e60c5452c7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 02:35:29 np0005558317 systemd[1]: Started libpod-conmon-79764660549df75de455581f909107f757ea306672c0645860bc4e60c5452c7a.scope.
Dec 13 02:35:29 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:35:29 np0005558317 podman[245704]: 2025-12-13 07:35:29.488776351 +0000 UTC m=+0.087168723 container init 79764660549df75de455581f909107f757ea306672c0645860bc4e60c5452c7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_kapitsa, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:35:29 np0005558317 podman[245704]: 2025-12-13 07:35:29.493697604 +0000 UTC m=+0.092089976 container start 79764660549df75de455581f909107f757ea306672c0645860bc4e60c5452c7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_kapitsa, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:35:29 np0005558317 podman[245704]: 2025-12-13 07:35:29.494751485 +0000 UTC m=+0.093143858 container attach 79764660549df75de455581f909107f757ea306672c0645860bc4e60c5452c7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_kapitsa, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 02:35:29 np0005558317 friendly_kapitsa[245717]: 167 167
Dec 13 02:35:29 np0005558317 systemd[1]: libpod-79764660549df75de455581f909107f757ea306672c0645860bc4e60c5452c7a.scope: Deactivated successfully.
Dec 13 02:35:29 np0005558317 podman[245704]: 2025-12-13 07:35:29.496927357 +0000 UTC m=+0.095319750 container died 79764660549df75de455581f909107f757ea306672c0645860bc4e60c5452c7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 02:35:29 np0005558317 systemd[1]: var-lib-containers-storage-overlay-67a9a22f64cc1d7546d012a37ce5c3f81e779a9d45ea82baf129a73b599ea542-merged.mount: Deactivated successfully.
Dec 13 02:35:29 np0005558317 podman[245704]: 2025-12-13 07:35:29.513773352 +0000 UTC m=+0.112165724 container remove 79764660549df75de455581f909107f757ea306672c0645860bc4e60c5452c7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_kapitsa, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Dec 13 02:35:29 np0005558317 podman[245704]: 2025-12-13 07:35:29.42002651 +0000 UTC m=+0.018418904 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:35:29 np0005558317 systemd[1]: libpod-conmon-79764660549df75de455581f909107f757ea306672c0645860bc4e60c5452c7a.scope: Deactivated successfully.
Dec 13 02:35:29 np0005558317 podman[245739]: 2025-12-13 07:35:29.631223943 +0000 UTC m=+0.028120576 container create 6bc2297ae43a6f4d04007652fc49727f2d2842b70ee513269592634711611d69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:35:29 np0005558317 systemd[1]: Started libpod-conmon-6bc2297ae43a6f4d04007652fc49727f2d2842b70ee513269592634711611d69.scope.
Dec 13 02:35:29 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:35:29 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d6d007434ea82bbeb271be5c0c7baa835946b9902e34611b4817be40370769/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:29 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d6d007434ea82bbeb271be5c0c7baa835946b9902e34611b4817be40370769/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:29 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d6d007434ea82bbeb271be5c0c7baa835946b9902e34611b4817be40370769/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:29 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d6d007434ea82bbeb271be5c0c7baa835946b9902e34611b4817be40370769/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:29 np0005558317 podman[245739]: 2025-12-13 07:35:29.689770577 +0000 UTC m=+0.086667211 container init 6bc2297ae43a6f4d04007652fc49727f2d2842b70ee513269592634711611d69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_chaum, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 02:35:29 np0005558317 podman[245739]: 2025-12-13 07:35:29.694530087 +0000 UTC m=+0.091426710 container start 6bc2297ae43a6f4d04007652fc49727f2d2842b70ee513269592634711611d69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_chaum, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:35:29 np0005558317 podman[245739]: 2025-12-13 07:35:29.696096503 +0000 UTC m=+0.092993136 container attach 6bc2297ae43a6f4d04007652fc49727f2d2842b70ee513269592634711611d69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_chaum, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 02:35:29 np0005558317 podman[245739]: 2025-12-13 07:35:29.620254627 +0000 UTC m=+0.017151280 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:35:29 np0005558317 bold_chaum[245752]: {
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:    "0": [
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:        {
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "devices": [
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "/dev/loop3"
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            ],
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "lv_name": "ceph_lv0",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "lv_size": "21470642176",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "name": "ceph_lv0",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "tags": {
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.cluster_name": "ceph",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.crush_device_class": "",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.encrypted": "0",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.objectstore": "bluestore",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.osd_id": "0",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.type": "block",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.vdo": "0",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.with_tpm": "0"
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            },
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "type": "block",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "vg_name": "ceph_vg0"
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:        }
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:    ],
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:    "1": [
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:        {
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "devices": [
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "/dev/loop4"
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            ],
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "lv_name": "ceph_lv1",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "lv_size": "21470642176",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "name": "ceph_lv1",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "tags": {
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.cluster_name": "ceph",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.crush_device_class": "",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.encrypted": "0",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.objectstore": "bluestore",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.osd_id": "1",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.type": "block",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.vdo": "0",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.with_tpm": "0"
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            },
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "type": "block",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "vg_name": "ceph_vg1"
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:        }
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:    ],
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:    "2": [
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:        {
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "devices": [
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "/dev/loop5"
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            ],
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "lv_name": "ceph_lv2",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "lv_size": "21470642176",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "name": "ceph_lv2",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "tags": {
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.cluster_name": "ceph",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.crush_device_class": "",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.encrypted": "0",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.objectstore": "bluestore",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.osd_id": "2",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.type": "block",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.vdo": "0",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:                "ceph.with_tpm": "0"
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            },
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "type": "block",
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:            "vg_name": "ceph_vg2"
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:        }
Dec 13 02:35:29 np0005558317 bold_chaum[245752]:    ]
Dec 13 02:35:29 np0005558317 bold_chaum[245752]: }
Dec 13 02:35:29 np0005558317 systemd[1]: libpod-6bc2297ae43a6f4d04007652fc49727f2d2842b70ee513269592634711611d69.scope: Deactivated successfully.
Dec 13 02:35:29 np0005558317 conmon[245752]: conmon 6bc2297ae43a6f4d0400 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6bc2297ae43a6f4d04007652fc49727f2d2842b70ee513269592634711611d69.scope/container/memory.events
Dec 13 02:35:29 np0005558317 podman[245739]: 2025-12-13 07:35:29.927248406 +0000 UTC m=+0.324145049 container died 6bc2297ae43a6f4d04007652fc49727f2d2842b70ee513269592634711611d69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 02:35:29 np0005558317 systemd[1]: var-lib-containers-storage-overlay-92d6d007434ea82bbeb271be5c0c7baa835946b9902e34611b4817be40370769-merged.mount: Deactivated successfully.
Dec 13 02:35:29 np0005558317 podman[245739]: 2025-12-13 07:35:29.949354452 +0000 UTC m=+0.346251084 container remove 6bc2297ae43a6f4d04007652fc49727f2d2842b70ee513269592634711611d69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_chaum, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:35:29 np0005558317 systemd[1]: libpod-conmon-6bc2297ae43a6f4d04007652fc49727f2d2842b70ee513269592634711611d69.scope: Deactivated successfully.
Dec 13 02:35:30 np0005558317 podman[245832]: 2025-12-13 07:35:30.292995006 +0000 UTC m=+0.028769326 container create 7c59e14666ed9659ec6b55795844c000235f0748b10bcda548a89a16fac69f22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dirac, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 02:35:30 np0005558317 systemd[1]: Started libpod-conmon-7c59e14666ed9659ec6b55795844c000235f0748b10bcda548a89a16fac69f22.scope.
Dec 13 02:35:30 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:35:30 np0005558317 podman[245832]: 2025-12-13 07:35:30.341816333 +0000 UTC m=+0.077590653 container init 7c59e14666ed9659ec6b55795844c000235f0748b10bcda548a89a16fac69f22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:35:30 np0005558317 podman[245832]: 2025-12-13 07:35:30.346263265 +0000 UTC m=+0.082037585 container start 7c59e14666ed9659ec6b55795844c000235f0748b10bcda548a89a16fac69f22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dirac, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 02:35:30 np0005558317 podman[245832]: 2025-12-13 07:35:30.347424799 +0000 UTC m=+0.083199119 container attach 7c59e14666ed9659ec6b55795844c000235f0748b10bcda548a89a16fac69f22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dirac, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:35:30 np0005558317 wonderful_dirac[245845]: 167 167
Dec 13 02:35:30 np0005558317 systemd[1]: libpod-7c59e14666ed9659ec6b55795844c000235f0748b10bcda548a89a16fac69f22.scope: Deactivated successfully.
Dec 13 02:35:30 np0005558317 podman[245832]: 2025-12-13 07:35:30.349760711 +0000 UTC m=+0.085535031 container died 7c59e14666ed9659ec6b55795844c000235f0748b10bcda548a89a16fac69f22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dirac, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 02:35:30 np0005558317 systemd[1]: var-lib-containers-storage-overlay-236175dcb5bb0fccb3b4bf32a381a695496c681b2c9b287be70f55fd31d71ac8-merged.mount: Deactivated successfully.
Dec 13 02:35:30 np0005558317 podman[245832]: 2025-12-13 07:35:30.368222816 +0000 UTC m=+0.103997136 container remove 7c59e14666ed9659ec6b55795844c000235f0748b10bcda548a89a16fac69f22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:35:30 np0005558317 podman[245832]: 2025-12-13 07:35:30.281360088 +0000 UTC m=+0.017134409 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:35:30 np0005558317 systemd[1]: libpod-conmon-7c59e14666ed9659ec6b55795844c000235f0748b10bcda548a89a16fac69f22.scope: Deactivated successfully.
Dec 13 02:35:30 np0005558317 podman[245866]: 2025-12-13 07:35:30.488813791 +0000 UTC m=+0.027051044 container create 3667e80530682efbf25f70ec132af7e9b0936eaa539f9791f24c7c50da590358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 02:35:30 np0005558317 systemd[1]: Started libpod-conmon-3667e80530682efbf25f70ec132af7e9b0936eaa539f9791f24c7c50da590358.scope.
Dec 13 02:35:30 np0005558317 systemd[1]: Started libcrun container.
Dec 13 02:35:30 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a1f255afc718aa5b79821a75b5bb2777997d3c5e690f2a206f4f32846dac2f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:30 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a1f255afc718aa5b79821a75b5bb2777997d3c5e690f2a206f4f32846dac2f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:30 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a1f255afc718aa5b79821a75b5bb2777997d3c5e690f2a206f4f32846dac2f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:30 np0005558317 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a1f255afc718aa5b79821a75b5bb2777997d3c5e690f2a206f4f32846dac2f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:30 np0005558317 podman[245866]: 2025-12-13 07:35:30.547185006 +0000 UTC m=+0.085422269 container init 3667e80530682efbf25f70ec132af7e9b0936eaa539f9791f24c7c50da590358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:35:30 np0005558317 podman[245866]: 2025-12-13 07:35:30.553129805 +0000 UTC m=+0.091367057 container start 3667e80530682efbf25f70ec132af7e9b0936eaa539f9791f24c7c50da590358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_poincare, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 02:35:30 np0005558317 podman[245866]: 2025-12-13 07:35:30.55438213 +0000 UTC m=+0.092619383 container attach 3667e80530682efbf25f70ec132af7e9b0936eaa539f9791f24c7c50da590358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_poincare, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 02:35:30 np0005558317 podman[245866]: 2025-12-13 07:35:30.477850938 +0000 UTC m=+0.016088211 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:35:30 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v762: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:31 np0005558317 lvm[245956]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:35:31 np0005558317 lvm[245957]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:35:31 np0005558317 lvm[245956]: VG ceph_vg0 finished
Dec 13 02:35:31 np0005558317 lvm[245957]: VG ceph_vg1 finished
Dec 13 02:35:31 np0005558317 lvm[245960]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:35:31 np0005558317 lvm[245960]: VG ceph_vg2 finished
Dec 13 02:35:31 np0005558317 intelligent_poincare[245879]: {}
Dec 13 02:35:31 np0005558317 systemd[1]: libpod-3667e80530682efbf25f70ec132af7e9b0936eaa539f9791f24c7c50da590358.scope: Deactivated successfully.
Dec 13 02:35:31 np0005558317 podman[245866]: 2025-12-13 07:35:31.19083728 +0000 UTC m=+0.729074533 container died 3667e80530682efbf25f70ec132af7e9b0936eaa539f9791f24c7c50da590358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:35:31 np0005558317 systemd[1]: var-lib-containers-storage-overlay-5a1f255afc718aa5b79821a75b5bb2777997d3c5e690f2a206f4f32846dac2f8-merged.mount: Deactivated successfully.
Dec 13 02:35:31 np0005558317 podman[245866]: 2025-12-13 07:35:31.211074973 +0000 UTC m=+0.749312225 container remove 3667e80530682efbf25f70ec132af7e9b0936eaa539f9791f24c7c50da590358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:35:31 np0005558317 systemd[1]: libpod-conmon-3667e80530682efbf25f70ec132af7e9b0936eaa539f9791f24c7c50da590358.scope: Deactivated successfully.
Dec 13 02:35:31 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:35:31 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:35:31 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:35:31 np0005558317 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:35:31 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:35:31 np0005558317 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 02:35:32 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v763: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:32 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:35:34 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v764: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:36 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v765: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:36 np0005558317 podman[245998]: 2025-12-13 07:35:36.711151215 +0000 UTC m=+0.051773010 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 02:35:37 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:35:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:35:38
Dec 13 02:35:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:35:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 02:35:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] pools ['.mgr', 'vms', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'images', 'volumes']
Dec 13 02:35:38 np0005558317 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:35:38 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v766: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:35:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:35:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:35:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:35:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:35:39 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:35:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:35:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:35:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:35:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:35:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:35:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:35:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:35:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:35:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:35:39 np0005558317 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:35:40 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v767: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:35:41.645 154121 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:35:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:35:41.645 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:35:41 np0005558317 ovn_metadata_agent[154116]: 2025-12-13 07:35:41.646 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:35:42 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v768: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:42 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:35:44 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v769: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:46 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v770: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:47 np0005558317 podman[246021]: 2025-12-13 07:35:47.702955004 +0000 UTC m=+0.039440561 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible)
Dec 13 02:35:47 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:35:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:35:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:35:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:35:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:35:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:35:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:35:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:35:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:35:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:35:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:35:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:35:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:35:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 02:35:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:35:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:35:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:35:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:35:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:35:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:35:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:35:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:35:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:35:48 np0005558317 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:35:48 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v771: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:50 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v772: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:52 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v773: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:52 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:35:54 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v774: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:55 np0005558317 podman[246038]: 2025-12-13 07:35:55.692088438 +0000 UTC m=+0.035689447 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 02:35:56 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v775: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:57 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:35:58 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v776: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:36:00 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v777: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:36:02 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v778: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:36:02 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:36:04 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v779: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:36:06 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v780: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:36:07 np0005558317 podman[246054]: 2025-12-13 07:36:07.709423853 +0000 UTC m=+0.052756949 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_id=ovn_controller)
Dec 13 02:36:07 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:36:08 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v781: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:36:09 np0005558317 systemd-logind[745]: New session 54 of user zuul.
Dec 13 02:36:09 np0005558317 systemd[1]: Started Session 54 of User zuul.
Dec 13 02:36:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:36:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:36:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:36:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:36:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:36:09 np0005558317 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:36:10 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v782: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:36:11 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14384 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:36:11 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14386 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:36:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec 13 02:36:12 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3822048027' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 13 02:36:12 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v783: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:36:12 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:36:13 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 02:36:13 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/231078060' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 02:36:13 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 02:36:13 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/231078060' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 02:36:14 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v784: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:36:16 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v785: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:36:17 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:36:18 np0005558317 podman[246365]: 2025-12-13 07:36:18.076025249 +0000 UTC m=+0.047653024 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd)
Dec 13 02:36:18 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v786: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:36:18 np0005558317 ovs-vsctl[246408]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 13 02:36:19 np0005558317 virtqemud[241006]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 13 02:36:19 np0005558317 virtqemud[241006]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 13 02:36:19 np0005558317 virtqemud[241006]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 13 02:36:19 np0005558317 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz asok_command: cache status {prefix=cache status} (starting...)
Dec 13 02:36:19 np0005558317 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz asok_command: client ls {prefix=client ls} (starting...)
Dec 13 02:36:20 np0005558317 lvm[246722]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:36:20 np0005558317 lvm[246722]: VG ceph_vg0 finished
Dec 13 02:36:20 np0005558317 lvm[246726]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:36:20 np0005558317 lvm[246726]: VG ceph_vg2 finished
Dec 13 02:36:20 np0005558317 lvm[246756]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:36:20 np0005558317 lvm[246756]: VG ceph_vg1 finished
Dec 13 02:36:20 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14394 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:36:20 np0005558317 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz asok_command: damage ls {prefix=damage ls} (starting...)
Dec 13 02:36:20 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v787: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:36:20 np0005558317 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz asok_command: dump loads {prefix=dump loads} (starting...)
Dec 13 02:36:20 np0005558317 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec 13 02:36:20 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14396 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:36:20 np0005558317 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec 13 02:36:20 np0005558317 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec 13 02:36:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Dec 13 02:36:21 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1623486156' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 13 02:36:21 np0005558317 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec 13 02:36:21 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14400 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:36:21 np0005558317 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec 13 02:36:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:36:21 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2980226997' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:36:21 np0005558317 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec 13 02:36:21 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14404 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:36:21 np0005558317 ceph-mgr[75200]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 13 02:36:21 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-qsherl[75196]: 2025-12-13T07:36:21.550+0000 7facc0ef1640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 13 02:36:21 np0005558317 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz asok_command: ops {prefix=ops} (starting...)
Dec 13 02:36:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Dec 13 02:36:21 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1875113700' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 13 02:36:21 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec 13 02:36:21 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/724077111' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 13 02:36:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec 13 02:36:22 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/5555665' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 13 02:36:22 np0005558317 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz asok_command: session ls {prefix=session ls} (starting...)
Dec 13 02:36:22 np0005558317 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz asok_command: status {prefix=status} (starting...)
Dec 13 02:36:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 13 02:36:22 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1994718485' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 13 02:36:22 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14414 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:36:22 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v788: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:36:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 13 02:36:22 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3909206045' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 13 02:36:22 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:36:22 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14418 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:36:23 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 13 02:36:23 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/113455235' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 13 02:36:23 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Dec 13 02:36:23 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2644590945' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 13 02:36:23 np0005558317 nova_compute[241222]: 2025-12-13 07:36:23.568 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:36:23 np0005558317 nova_compute[241222]: 2025-12-13 07:36:23.586 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:36:23 np0005558317 nova_compute[241222]: 2025-12-13 07:36:23.586 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:36:23 np0005558317 nova_compute[241222]: 2025-12-13 07:36:23.587 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:36:23 np0005558317 nova_compute[241222]: 2025-12-13 07:36:23.587 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 02:36:23 np0005558317 nova_compute[241222]: 2025-12-13 07:36:23.587 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 02:36:23 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 13 02:36:23 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3721358860' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 13 02:36:23 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec 13 02:36:23 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1075121361' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 13 02:36:24 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 02:36:24 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/770081155' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 02:36:24 np0005558317 nova_compute[241222]: 2025-12-13 07:36:24.083 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 02:36:24 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 13 02:36:24 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3949782524' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 13 02:36:24 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14432 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:36:24 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-qsherl[75196]: 2025-12-13T07:36:24.315+0000 7facc0ef1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 13 02:36:24 np0005558317 ceph-mgr[75200]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 13 02:36:24 np0005558317 nova_compute[241222]: 2025-12-13 07:36:24.326 241226 WARNING nova.virt.libvirt.driver [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 02:36:24 np0005558317 nova_compute[241222]: 2025-12-13 07:36:24.327 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4989MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": 
"0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 02:36:24 np0005558317 nova_compute[241222]: 2025-12-13 07:36:24.327 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:36:24 np0005558317 nova_compute[241222]: 2025-12-13 07:36:24.327 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:36:24 np0005558317 nova_compute[241222]: 2025-12-13 07:36:24.381 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 02:36:24 np0005558317 nova_compute[241222]: 2025-12-13 07:36:24.381 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 02:36:24 np0005558317 nova_compute[241222]: 2025-12-13 07:36:24.396 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 02:36:24 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v789: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:36:24 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 13 02:36:24 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3167973010' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 13 02:36:24 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 02:36:24 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1272743291' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 02:36:24 np0005558317 nova_compute[241222]: 2025-12-13 07:36:24.853 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 02:36:24 np0005558317 nova_compute[241222]: 2025-12-13 07:36:24.860 241226 DEBUG nova.compute.provider_tree [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Inventory has not changed in ProviderTree for provider: 1d614cf3-e40f-4742-a628-7a61041be9be update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 02:36:24 np0005558317 nova_compute[241222]: 2025-12-13 07:36:24.880 241226 DEBUG nova.scheduler.client.report [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Inventory has not changed for provider 1d614cf3-e40f-4742-a628-7a61041be9be based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 02:36:24 np0005558317 nova_compute[241222]: 2025-12-13 07:36:24.881 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 02:36:24 np0005558317 nova_compute[241222]: 2025-12-13 07:36:24.881 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:36:24 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec 13 02:36:24 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3242158922' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 13 02:36:25 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14440 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:36:25 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Dec 13 02:36:25 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/705410132' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 13 02:36:25 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14444 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.614902 4 0.000019
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000008 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000016 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000039 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.615050 4 0.000017
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.615306 4 0.000011
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000008 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000034 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000058 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.615449 4 0.000009
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.615405 4 0.000049
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.615553 4 0.000125
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000049 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000169 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000018 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000191 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000039 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000181 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.615401 4 0.000008
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000030 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000063 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.615886 4 0.000009
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000243 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000271 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.615723 4 0.000116
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000114 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000018 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000711 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.616213 4 0.000009
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000026 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.616278 4 0.000012
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.616331 4 0.000011
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000029 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.615506 4 0.000012
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000020 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000057 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000035 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000064 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.616740 4 0.000011
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000024 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.616732 4 0.000014
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.616541 4 0.000012
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000027 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000026 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.616843 4 0.000011
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.616687 4 0.000010
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000024 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000029 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000048 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.616821 4 0.000008
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.616977 4 0.000019
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000025 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.616839 4 0.000012
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000010 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000029 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.616846 4 0.000011
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000024 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000022 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.616833 4 0.000012
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.0( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=47 pruub=9.963310242s) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 43'65 mlcod 0'0 peering pruub 100.238204956s@ mbc={}] exit Started/Primary/Peering/WaitUpThru 0.617906 3 0.000321
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.0( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=47 pruub=9.963310242s) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 43'65 mlcod 0'0 peering pruub 100.238204956s@ mbc={}] exit Started/Primary/Peering 0.618042 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.0( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=47 pruub=9.963310242s) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 43'65 mlcod 0'0 unknown pruub 100.238204956s@ mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.0( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 43'65 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.617012 4 0.000010
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000026 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000022 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.617059 4 0.000011
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.617077 4 0.000008
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000011 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.617230 4 0.000011
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000026 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000031 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000045 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000025 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.617118 4 0.000011
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.617334 4 0.000010
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.616109 4 0.000011
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000025 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.617682 4 0.000017
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000302 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000323 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000022 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002380 3 0.000094
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002227 3 0.000136
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002089 3 0.000231
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000420 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000438 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002176 3 0.000096
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002138 3 0.000092
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001971 3 0.000336
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001696 3 0.000696
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001238 3 0.000060
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001367 3 0.000940
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001135 3 0.000095
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001127 3 0.000148
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001098 3 0.000910
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001074 3 0.000056
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001020 3 0.000065
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000030 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.0( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 43'65 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002085 3 0.000072
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002046 3 0.000066
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002025 3 0.000148
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002000 3 0.000055
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002250 3 0.000119
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002249 3 0.000059
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002281 3 0.000053
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.0( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=39/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 43'65 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002284 3 0.000025
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.0( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=39/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 43'65 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.0( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=39/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 43'65 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.0( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=39/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 43'65 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002290 3 0.000056
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002292 3 0.000051
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002259 3 0.000064
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002281 3 0.000075
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002285 3 0.000054
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001989 3 0.001149
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001954 3 0.000359
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001954 3 0.000061
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001886 3 0.000473
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002039 3 0.002192
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 278528 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 48 handle_osd_map epochs [49,49], i have 48, src has [1,49]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 204800 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 49 heartbeat osd_stat(store_statfs(0x4fe140000/0x0/0x4ffc00000, data 0x3987a/0x86000, compress 0x0/0x0/0x0, omap 0x77ba, meta 0x1a28846), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 204800 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 196608 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 49 handle_osd_map epochs [50,50], i have 49, src has [1,50]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.943325996s of 11.972549438s, submitted: 167
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 0 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 458189 data_alloc: 218103808 data_used: 0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 50 handle_osd_map epochs [51,51], i have 50, src has [1,51]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.136144 7 0.000231
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.136141 7 0.000074
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.138559 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.138272 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.138474 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.138631 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.138504 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.138661 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863642693s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.896354675s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863589287s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896354675s@ mbc={}] exit Reset 0.000103 1 0.000168
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.136273 7 0.000023
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863589287s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896354675s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.138472 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863589287s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896354675s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.138530 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863589287s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896354675s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863589287s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896354675s@ mbc={}] exit Start 0.000006 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.138547 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863589287s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896354675s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863616943s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.896415710s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active+clean] exit Started/Primary/Active/Clean 7.136288 7 0.000021
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started/Primary/Active 7.138290 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started/Primary 7.138363 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863591194s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896415710s@ mbc={}] exit Reset 0.000044 1 0.000061
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863591194s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896415710s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started 7.138621 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863591194s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896415710s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863591194s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896415710s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863591194s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896415710s@ mbc={}] exit Start 0.000008 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863591194s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896415710s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active+clean] exit Started/Primary/Active/Clean 7.136298 7 0.000022
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863552094s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 active pruub 106.896423340s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started/Primary/Active 7.137687 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started/Primary 7.138407 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started 7.138427 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863612175s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 active pruub 106.896522522s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863510132s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.896423340s@ mbc={}] exit Reset 0.000077 1 0.000123
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863510132s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.896423340s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863510132s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.896423340s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863510132s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.896423340s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863510132s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.896423340s@ mbc={}] exit Start 0.000006 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863510132s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.896423340s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863586426s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.896522522s@ mbc={}] exit Reset 0.000040 1 0.000061
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863586426s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.896522522s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863586426s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.896522522s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863586426s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.896522522s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863586426s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.896522522s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863586426s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.896522522s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.134176 7 0.000032
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.136242 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.136280 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.136297 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.865627289s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.898643494s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.136454 7 0.000023
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.865616798s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898643494s@ mbc={}] exit Reset 0.000021 1 0.000036
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.138173 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.865616798s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898643494s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.138454 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.865616798s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898643494s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.865616798s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898643494s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.865616798s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898643494s@ mbc={}] exit Start 0.000006 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.138470 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.865616798s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898643494s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863466263s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.896522522s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863451004s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896522522s@ mbc={}] exit Reset 0.000029 1 0.000051
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863451004s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896522522s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863451004s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896522522s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863451004s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896522522s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863451004s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896522522s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863451004s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896522522s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.136560 7 0.000022
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.137792 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.137829 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.137846 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.136590 7 0.000022
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.137751 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.137818 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863352776s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.896545410s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.137836 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863339424s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896545410s@ mbc={}] exit Reset 0.000031 1 0.000052
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863339424s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896545410s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863339424s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896545410s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863326073s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.896537781s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863339424s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896545410s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863339424s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896545410s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863339424s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896545410s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863314629s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896537781s@ mbc={}] exit Reset 0.000023 1 0.000044
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863314629s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896537781s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863314629s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896537781s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863314629s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896537781s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863314629s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896537781s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863314629s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896537781s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.136732 7 0.000021
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.137775 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.137816 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.135677 7 0.000287
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.137785 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.137819 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.137837 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.864120483s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.897537231s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.864109993s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897537231s@ mbc={}] exit Reset 0.000025 1 0.000045
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.864109993s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897537231s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.864109993s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897537231s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.864109993s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897537231s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.864109993s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897537231s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.864109993s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897537231s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.135769 7 0.000839
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.137834 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.137867 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.137913 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863996506s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.897552490s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863986015s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897552490s@ mbc={}] exit Reset 0.000023 1 0.000071
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863986015s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897552490s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863986015s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897552490s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863986015s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897552490s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863986015s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897552490s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863986015s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897552490s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.135614 7 0.000022
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.137884 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.137922 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.137937 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863911629s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.897605896s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863902092s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897605896s@ mbc={}] exit Reset 0.000022 1 0.000039
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863902092s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897605896s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863902092s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897605896s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863902092s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897605896s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863902092s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897605896s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863902092s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897605896s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.135994 7 0.000851
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.138010 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.138042 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.138244 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862752914s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.896553040s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862736702s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896553040s@ mbc={}] exit Reset 0.000033 1 0.000472
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862736702s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896553040s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862736702s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896553040s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862736702s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896553040s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862736702s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896553040s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862736702s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896553040s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active+clean] exit Started/Primary/Active/Clean 7.135837 7 0.000025
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started/Primary/Active 7.138111 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started/Primary 7.138143 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started 7.138158 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.138301 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863280296s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 active pruub 106.897605896s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863239288s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897605896s@ mbc={}] exit Reset 0.000436 1 0.000451
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active+clean] exit Started/Primary/Active/Clean 7.136239 7 0.000027
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863158226s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.897590637s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863239288s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897605896s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863239288s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897605896s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863239288s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897605896s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863135338s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897590637s@ mbc={}] exit Reset 0.000430 1 0.000692
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863239288s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897605896s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863135338s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897590637s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863239288s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897605896s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863135338s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897590637s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863135338s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897590637s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863135338s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897590637s@ mbc={}] exit Start 0.000006 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863135338s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897590637s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started/Primary/Active 7.138622 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started/Primary 7.138716 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.136361 7 0.000027
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.138685 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.138716 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.138731 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started 7.138746 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862961769s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 active pruub 106.897644043s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862939835s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897644043s@ mbc={}] exit Reset 0.000041 1 0.000276
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862939835s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897644043s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863029480s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.897651672s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862827301s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897651672s@ mbc={}] exit Reset 0.000216 1 0.000249
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862827301s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897651672s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862827301s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897651672s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862827301s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897651672s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862827301s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897651672s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862827301s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897651672s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.136609 7 0.000025
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.138921 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.138975 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active+clean] exit Started/Primary/Active/Clean 7.136668 7 0.000022
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started/Primary/Active 7.138975 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started/Primary 7.139007 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started 7.139023 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862700462s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 active pruub 106.897689819s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.138992 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862675667s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897689819s@ mbc={}] exit Reset 0.000036 1 0.000054
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862665176s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.897674561s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862675667s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897689819s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862675667s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897689819s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862675667s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897689819s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862675667s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897689819s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862675667s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897689819s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862626076s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897674561s@ mbc={}] exit Reset 0.000057 1 0.000175
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862626076s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897674561s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862626076s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897674561s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862626076s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897674561s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862626076s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897674561s@ mbc={}] exit Start 0.000022 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862626076s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897674561s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active+clean] exit Started/Primary/Active/Clean 7.136811 7 0.000023
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started/Primary/Active 7.138790 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started/Primary 7.139122 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started 7.139138 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862836838s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 active pruub 106.898017883s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862815857s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.898017883s@ mbc={}] exit Reset 0.000034 1 0.000054
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862815857s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.898017883s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862815857s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.898017883s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862815857s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.898017883s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862815857s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.898017883s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862815857s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.898017883s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.136869 7 0.000031
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.138783 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.139232 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.139247 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862802505s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.898086548s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862786293s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898086548s@ mbc={}] exit Reset 0.000027 1 0.000044
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862786293s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898086548s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862786293s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898086548s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862786293s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898086548s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862786293s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898086548s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862786293s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898086548s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862939835s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897644043s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862939835s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897644043s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.137017 7 0.000022
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.139117 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.139148 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.139180 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862488747s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.898033142s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862475395s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898033142s@ mbc={}] exit Reset 0.000028 1 0.000178
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862475395s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898033142s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862475395s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898033142s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862475395s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898033142s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862475395s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898033142s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862475395s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898033142s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862939835s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897644043s@ mbc={}] exit Start 0.000976 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862939835s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897644043s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.860626221s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.896415710s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.860574722s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896415710s@ mbc={}] exit Reset 0.003190 1 0.003243
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.860574722s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896415710s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.860574722s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896415710s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.860574722s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896415710s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.860574722s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896415710s@ mbc={}] exit Start 0.000038 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.860574722s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896415710s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000093 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000016
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000121 1 0.000024
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000035 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000093 1 0.000035
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000108 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000014
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000492 1 0.000024
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000035 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000036 1 0.000019
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000030 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000033 1 0.000018
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000051 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000007
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000206 1 0.000019
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000033 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000008
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000033 1 0.000018
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000036 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000030 1 0.000018
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000035 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000006
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000034 1 0.000023
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000031 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000006
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000045 1 0.000017
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000030 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000006
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000026 1 0.000018
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 51 handle_osd_map epochs [51,51], i have 51, src has [1,51]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000032 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000038 1 0.000017
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000027 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000008
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000018 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000029 1 0.000033
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000030 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000026 1 0.000018
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000034 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000006
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000027 1 0.000018
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000033 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000006
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000044 1 0.000023
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000027 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000006
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000025 1 0.000016
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000031 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000006
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000028 1 0.000019
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000063 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000054 1 0.000022
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000033 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000043 1 0.000019
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000050 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000034
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000042 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000059 1 0.000196
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000067 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000035
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000058 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000050 1 0.000164
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010017 2 0.000049
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000039 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010917 2 0.000030
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000032 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000013
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000044 1 0.000036
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004738 2 0.000029
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.004609 2 0.000017
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004486 2 0.000016
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004312 2 0.000028
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004208 2 0.000016
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.004200 2 0.000016
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004107 2 0.000015
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003989 2 0.000017
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003900 2 0.000021
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004089 2 0.000028
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004004 2 0.000015
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003912 2 0.000014
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003844 2 0.000016
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003464 2 0.000024
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003047 2 0.000020
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002905 2 0.000050
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003714 2 0.000019
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003881 2 0.000014
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003139 2 0.000050
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000011 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002807 2 0.000047
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002160 2 0.000025
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 278528 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 51 heartbeat osd_stat(store_statfs(0x4fe13c000/0x0/0x4ffc00000, data 0x3ce7d/0x8c000, compress 0x0/0x0/0x0, omap 0x7cac, meta 0x1a28354), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 51 handle_osd_map epochs [51,52], i have 51, src has [1,52]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 51 handle_osd_map epochs [51,52], i have 52, src has [1,52]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.890346 2 0.000016
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.893868 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.891091 2 0.000097
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1f( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.890336 2 0.000160
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1f( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.894119 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.895489 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.12( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.892869 2 0.000014
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.12( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.895848 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.12( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.11( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.892939 2 0.000011
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.11( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.896064 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.11( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1e( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.892853 2 0.000213
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1e( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.896797 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1e( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1c( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.893092 2 0.000044
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1c( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.896990 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1c( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.893235 2 0.000018
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.897202 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.12( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.889842 2 0.000168
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.12( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.895873 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.12( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1f( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.18( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.893963 2 0.000013
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.18( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.897907 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.18( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.893474 2 0.000032
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.897821 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.4( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.892527 2 0.000070
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.4( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.894756 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.4( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894370 2 0.000017
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.898623 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894798 2 0.000015
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.899337 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.8( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894593 2 0.000012
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.8( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.898757 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.8( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.d( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894921 2 0.000024
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.d( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.899473 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.d( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895229 2 0.000019
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.899897 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.3( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894866 2 0.000010
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.3( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.898923 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.3( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.2( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895431 2 0.000018
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.2( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.900689 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.2( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=51/52 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.15( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895847 2 0.000610
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.15( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.906564 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.15( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] exit Started/Stray 0.913313 7 0.000043
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] exit Started/Stray 0.910568 7 0.001039
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.15( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895820 2 0.000029
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.15( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.907603 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.15( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002557 3 0.002320
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002380 3 0.000042
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] exit Started/Stray 0.911622 7 0.000038
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002374 3 0.000134
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005004 3 0.000508
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002370 3 0.000224
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002325 3 0.000077
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002669 3 0.000044
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002530 3 0.002826
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000090 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002605 3 0.003081
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002607 3 0.000299
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002520 3 0.000058
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002734 3 0.000423
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 52 handle_osd_map epochs [52,52], i have 52, src has [1,52]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.11( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.889786 2 0.000044
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.11( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.898576 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.11( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1a( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.896561 2 0.000015
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1a( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.900612 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1a( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] exit Started/Stray 0.914437 7 0.000117
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003576 3 0.000034
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.003692 3 0.000195
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003168 3 0.000098
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=51/52 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.003231 3 0.000046
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003437 3 0.000047
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000193 1 0.000024
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=51/52 n=1 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003301 3 0.000029
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=51/52 n=1 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=51/52 n=1 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=51/52 n=1 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003696 3 0.000037
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003042 3 0.000263
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002522 3 0.000941
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 52 handle_osd_map epochs [52,52], i have 52, src has [1,52]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004185 3 0.000070
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004314 3 0.005992
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] exit Started/Stray 0.919320 7 0.000227
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] exit Started/Stray 0.917270 7 0.000034
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.921184 7 0.000040
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.920163 7 0.000028
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.920433 7 0.000026
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.920317 7 0.000027
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.919186 7 0.000045
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.918495 7 0.000024
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.918759 7 0.000027
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.922683 7 0.000031
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.923848 7 0.000032
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.923879 7 0.000028
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.923686 7 0.000040
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.924108 7 0.000036
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.922692 7 0.000033
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.922096 7 0.000055
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.924231 7 0.000370
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.921267 7 0.000118
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.041322 2 0.000024
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ReplicaActive 0.041340 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.119749 2 0.000017
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ReplicaActive 0.119771 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 43'2 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.159613 3 0.000023
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 43'2 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.159787 3 0.000069
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 43'2 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 43'2 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.167212 2 0.000020
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ReplicaActive 0.167240 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 36'6 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.069833 1 0.000043
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.224708 1 0.000033
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 36'6 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.224740 1 0.000016
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 36'6 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000068 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 36'6 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.224820 1 0.000013
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.224872 1 0.000012
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.224903 1 0.000013
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.224950 1 0.000013
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.224978 1 0.000012
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.223357 1 0.000023
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.222118 1 0.000028
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.222119 1 0.000029
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.222149 1 0.000016
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.222223 1 0.000014
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.222237 1 0.000015
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.222310 1 0.000015
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.222336 1 0.000148
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.222355 1 0.000040
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.191997 1 0.000032
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.107618 1 0.000040
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.066187 1 0.000083
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1e( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.007515 1 0.000079
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1e( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.232261 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1e( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.153481 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.7( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.014831 1 0.000084
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.7( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.239601 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.7( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.159785 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.4( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.022343 1 0.000039
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.4( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.247196 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.4( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.167651 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.249540 2 0.000013
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ReplicaActive 0.249564 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000044 1 0.000085
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.8( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.030255 1 0.000042
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.8( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.255151 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.8( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.175491 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.036852 1 0.000028
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.261792 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.181015 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.17( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.044307 1 0.000062
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.17( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.269284 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.17( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.187801 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 991232 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.16( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.051557 1 0.000097
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.16( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.276570 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.16( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.195367 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.058926 1 0.000057
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.282317 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.205087 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.066271 1 0.000022
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.288429 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.212318 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.073647 1 0.000071
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.295790 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.219688 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.081025 1 0.000085
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.303208 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.226914 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.088271 1 0.000019
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.310532 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.234678 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.095693 1 0.000061
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.317989 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.240722 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.103029 1 0.000053
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.325374 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.247521 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.110431 1 0.000021
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.332798 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.257185 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.117742 1 0.000016
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.340122 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.261495 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.e( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete/Deleting 0.154745 2 0.000061
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.e( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete 0.346777 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.e( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started 1.299718 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 DELETING pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete/Deleting 0.162080 2 0.000103
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete 0.269733 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started 1.308866 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.d( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete/Deleting 0.169736 2 0.000156
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.d( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete 0.235980 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.d( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started 1.316589 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 DELETING pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete/Deleting 0.154755 2 0.000132
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete 0.154857 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started 1.321735 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.590672 2 0.000016
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ReplicaActive 0.590692 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000060 1 0.000040
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.15( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete/Deleting 0.008114 2 0.000119
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.15( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete 0.008205 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.15( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started 1.510543 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.692428 2 0.000020
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ReplicaActive 0.692445 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000059 1 0.000036
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.9( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete/Deleting 0.008140 2 0.000102
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.9( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete 0.008245 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.9( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started 1.615159 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 52 handle_osd_map epochs [52,53], i have 52, src has [1,53]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 66093056 unmapped: 2056192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 466944 data_alloc: 218103808 data_used: 0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 53 heartbeat osd_stat(store_statfs(0x4fe134000/0x0/0x4ffc00000, data 0x41159/0x94000, compress 0x0/0x0/0x0, omap 0x81d4, meta 0x1a27e2c), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 1998848 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 53 handle_osd_map epochs [54,55], i have 53, src has [1,55]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 1990656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 55 heartbeat osd_stat(store_statfs(0x4fe12d000/0x0/0x4ffc00000, data 0x46360/0x9d000, compress 0x0/0x0/0x0, omap 0x8704, meta 0x1a278fc), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 55 handle_osd_map epochs [55,56], i have 55, src has [1,56]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 56 heartbeat osd_stat(store_statfs(0x4fe12d000/0x0/0x4ffc00000, data 0x46360/0x9d000, compress 0x0/0x0/0x0, omap 0x8704, meta 0x1a278fc), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 917504 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 917504 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 480563 data_alloc: 218103808 data_used: 0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 56 handle_osd_map epochs [56,57], i have 56, src has [1,57]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.125689507s of 11.183286667s, submitted: 233
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 917504 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 57 heartbeat osd_stat(store_statfs(0x4fe12c000/0x0/0x4ffc00000, data 0x47daf/0xa0000, compress 0x0/0x0/0x0, omap 0x896a, meta 0x1a27696), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 909312 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 909312 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 909312 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 57 handle_osd_map epochs [58,59], i have 57, src has [1,59]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 901120 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490751 data_alloc: 218103808 data_used: 848
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 59 heartbeat osd_stat(store_statfs(0x4fe11f000/0x0/0x4ffc00000, data 0x4d083/0xa9000, compress 0x0/0x0/0x0, omap 0x8ea4, meta 0x1a2715c), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 884736 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 876544 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 843776 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 59 handle_osd_map epochs [60,61], i have 59, src has [1,61]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=0 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000062 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=0 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000018
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000321 1 0.000029
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000024 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000367 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=0 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000039 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=0 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000015
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000119 1 0.000028
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000017 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000155 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=0 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000035 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=0 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000012
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000053 1 0.000023
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=0 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000040 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=0 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000015
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000115 1 0.000028
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000016 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000151 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000901 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000970 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 61 handle_osd_map epochs [61,62], i have 61, src has [1,62]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.202425 2 0.000922
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.203433 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.203461 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 62 handle_osd_map epochs [61,62], i have 62, src has [1,62]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000168 1 0.000240
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000045 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.203563 2 0.000042
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.203790 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.203962 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.204592 2 0.000058
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.204989 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.205012 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.204354 2 0.000042
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.204533 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000054 1 0.000109
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.204550 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000055 1 0.000272
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000097 1 0.001025
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 62 handle_osd_map epochs [62,62], i have 62, src has [1,62]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1064960 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 62 handle_osd_map epochs [62,63], i have 62, src has [1,63]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.16( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.002253 6 0.000064
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.16( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.16( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.e( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=43'551 remapped NOTIFY m=8 mbc={}] exit Started/Stray 1.001620 6 0.000031
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.e( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=43'551 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.e( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=43'551 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.6( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] exit Started/Stray 1.002435 6 0.000029
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.6( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.6( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.1e( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] exit Started/Stray 1.003656 6 0.000130
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.1e( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.1e( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.16( v 43'551 lc 42'76 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001608 3 0.000092
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.16( v 43'551 lc 42'76 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.16( v 43'551 lc 42'76 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000279 1 0.000215
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.16( v 43'551 lc 42'76 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.028506 1 0.000025
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.6( v 43'551 lc 42'87 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.030253 3 0.000194
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.6( v 43'551 lc 42'87 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.6( v 43'551 lc 42'87 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000037 1 0.000091
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.6( v 43'551 lc 42'87 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 521577 data_alloc: 218103808 data_used: 848
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.045531 1 0.000022
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.e( v 43'551 lc 42'47 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.076312 3 0.000126
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.e( v 43'551 lc 42'47 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.e( v 43'551 lc 42'47 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000062 1 0.000072
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.e( v 43'551 lc 42'47 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.059847 1 0.000062
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.1e( v 43'551 lc 43'291 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.135387 3 0.000352
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.1e( v 43'551 lc 43'291 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.1e( v 43'551 lc 43'291 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000051 1 0.000052
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.1e( v 43'551 lc 43'291 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.045539 1 0.000060
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.221976280s of 10.248309135s, submitted: 47
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 63 handle_osd_map epochs [60,64], i have 64, src has [1,64]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=0 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000058 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=0 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000014
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000277 1 0.000212
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000320 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=0 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000106 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=0 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000020
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000138 1 0.000271
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.981002 1 0.000030
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 1.011472 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000018 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.013748 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000432 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=0 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000034 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=0 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000040
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000045 1 0.000040
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.935430 1 0.000018
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 1.011307 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.013776 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=0 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000090 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=0 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000356 1 0.000376
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000019
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000156 1 0.000168
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000205 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000263 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000022 1 0.000021
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000130 1 0.000139
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000156 1 0.000027
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000017 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000188 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.829932 1 0.000024
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 1.011206 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.014959 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000034 1 0.000045
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000014 1 0.000021
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.875715 1 0.000043
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 1.012145 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.013787 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000030 1 0.000143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000053 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000075 1 0.000074
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=12
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=12
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001168 3 0.000025
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=9
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001154 3 0.000026
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001163 3 0.000025
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=26
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=26
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000991 3 0.000087
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 786432 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x540c3/0xbb000, compress 0x0/0x0/0x0, omap 0x9616, meta 0x1a269ea), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 64 handle_osd_map epochs [64,65], i have 64, src has [1,65]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 64 handle_osd_map epochs [64,65], i have 65, src has [1,65]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991224 2 0.000038
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.992342 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991481 2 0.000026
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.992688 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.992797 2 0.000038
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.993005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.993023 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000060 1 0.000090
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992015 2 0.000036
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.993251 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=64/65 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.993369 2 0.000224
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.993666 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.993796 2 0.000056
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.993679 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.994241 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.994259 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000089 1 0.000126
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.994522 2 0.000051
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.994852 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.994877 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000368 1 0.000384
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000417 1 0.000440
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992916 2 0.000032
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.994239 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=7 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001570 3 0.000118
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=7 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=7 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=7 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=64/65 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001491 3 0.000042
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=64/65 n=7 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001140 3 0.000082
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=64/65 n=7 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=64/65 n=7 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=64/65 n=7 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 65 handle_osd_map epochs [65,65], i have 65, src has [1,65]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002090 3 0.000144
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 65 handle_osd_map epochs [65,65], i have 65, src has [1,65]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 65 heartbeat osd_stat(store_statfs(0x4fe107000/0x0/0x4ffc00000, data 0x577d2/0xc1000, compress 0x0/0x0/0x0, omap 0x9b60, meta 0x1a264a0), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 65 handle_osd_map epochs [65,66], i have 65, src has [1,66]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.7( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=43'551 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.004936 6 0.000026
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.7( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=43'551 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.7( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=43'551 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.f( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=43'551 remapped NOTIFY m=8 mbc={}] exit Started/Stray 1.005246 6 0.000038
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.f( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=43'551 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.f( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=43'551 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.1f( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=43'551 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.006132 6 0.000028
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.1f( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=43'551 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.1f( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=43'551 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.17( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.005288 6 0.000026
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.17( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.17( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.7( v 43'551 lc 42'63 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001874 3 0.000256
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.7( v 43'551 lc 42'63 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.7( v 43'551 lc 42'63 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000035 1 0.000048
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.7( v 43'551 lc 42'63 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 589824 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.f( v 43'551 lc 42'41 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.051459 3 0.000400
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.f( v 43'551 lc 42'41 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.049819 1 0.000015
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.f( v 43'551 lc 42'41 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000124 1 0.000024
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.f( v 43'551 lc 42'41 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.059718 1 0.000024
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.1f( v 43'551 lc 42'133 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.111224 3 0.000046
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.1f( v 43'551 lc 42'133 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.1f( v 43'551 lc 42'133 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000037 1 0.000051
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.1f( v 43'551 lc 42'133 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.038649 1 0.000022
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.17( v 43'551 lc 42'141 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.149904 3 0.000361
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.17( v 43'551 lc 42'141 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.17( v 43'551 lc 42'141 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000047 1 0.000034
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.17( v 43'551 lc 42'141 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.031595 1 0.000022
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 66 heartbeat osd_stat(store_statfs(0x4fe0ff000/0x0/0x4ffc00000, data 0x59866/0xcd000, compress 0x0/0x0/0x0, omap 0x9dcb, meta 0x1a26235), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=0 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000088 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=0 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000017
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000158 1 0.000038
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000026 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000198 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=0 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000042 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=0 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000011
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000300 1 0.000025
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000020 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000337 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 66 handle_osd_map epochs [66,67], i have 67, src has [1,67]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.095232 2 0.000169
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.095603 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.095628 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000038 1 0.000085
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.095846 2 0.000048
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.096063 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.096078 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000080 1 0.000100
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000027 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.839957 1 0.000056
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.989964 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 1.996114 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000033 1 0.000053
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.938852 1 0.000119
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.990699 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 1.995676 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000029 1 0.000061
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.879275 1 0.000023
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.990657 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 1.996137 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000034 1 0.000108
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.808801 1 0.000019
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.990405 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 1.995720 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000036 1 0.000049
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001274 2 0.000023
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 67 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001181 2 0.000020
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000951 2 0.000035
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000866 2 0.000050
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=11
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000701 2 0.000086
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=23
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=23
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=11
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000593 2 0.000034
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000554 2 0.000018
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=20
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=20
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000475 2 0.000342
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 1384448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.18( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] exit Started/Stray 1.001797 5 0.000025
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.18( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.18( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.8( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=43'551 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.001599 5 0.000050
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.8( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=43'551 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.8( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=43'551 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999528 2 0.000031
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001554 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999595 2 0.000029
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001405 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999561 2 0.000025
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001247 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999739 2 0.000026
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001273 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.18( v 43'551 lc 42'36 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001827 4 0.000080
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.18( v 43'551 lc 42'36 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.18( v 43'551 lc 42'36 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000052 1 0.000049
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.18( v 43'551 lc 42'36 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/54 les/c/f=68/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003868 3 0.000136
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/54 les/c/f=68/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/54 les/c/f=68/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000043 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/54 les/c/f=68/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/55 les/c/f=68/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003900 3 0.000308
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/54 les/c/f=68/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003809 3 0.000153
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/54 les/c/f=68/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/54 les/c/f=68/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/54 les/c/f=68/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003839 3 0.000036
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/54 les/c/f=68/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/55 les/c/f=68/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/55 les/c/f=68/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000040 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/55 les/c/f=68/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/54 les/c/f=68/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/54 les/c/f=68/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/54 les/c/f=68/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.042691 1 0.000023
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.8( v 43'551 lc 42'51 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.044737 4 0.000085
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.8( v 43'551 lc 42'51 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.8( v 43'551 lc 42'51 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000075 1 0.000067
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.8( v 43'551 lc 42'51 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 1269760 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 623972 data_alloc: 218103808 data_used: 848
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.052613 1 0.000038
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.976872 1 0.000021
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 1.021516 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.924007 1 0.000026
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.023334 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 1.021514 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.023156 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000043 1 0.000066
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000044 1 0.000060
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000021 1 0.000024
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000064 1 0.000259
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=19
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=19
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001029 3 0.000039
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=15
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000797 3 0.000032
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 1179648 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 69 heartbeat osd_stat(store_statfs(0x4fe0f1000/0x0/0x4ffc00000, data 0x5d163/0xd7000, compress 0x0/0x0/0x0, omap 0xa2e0, meta 0x1a25d20), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 69 handle_osd_map epochs [69,70], i have 69, src has [1,70]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999330 2 0.000038
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000237 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=69/70 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999439 2 0.000034
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000524 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=69/70 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=69/70 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=69/70 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=69/70 n=7 ec=47/37 lis/c=69/47 les/c/f=70/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001220 3 0.000081
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=69/70 n=7 ec=47/37 lis/c=69/47 les/c/f=70/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=69/70 n=6 ec=47/37 lis/c=69/47 les/c/f=70/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001162 3 0.000098
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=69/70 n=7 ec=47/37 lis/c=69/47 les/c/f=70/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=69/70 n=6 ec=47/37 lis/c=69/47 les/c/f=70/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=69/70 n=7 ec=47/37 lis/c=69/47 les/c/f=70/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=69/70 n=6 ec=47/37 lis/c=69/47 les/c/f=70/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=69/70 n=6 ec=47/37 lis/c=69/47 les/c/f=70/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 1122304 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 70 handle_osd_map epochs [70,70], i have 70, src has [1,70]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1114112 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 70 heartbeat osd_stat(store_statfs(0x4fe0ed000/0x0/0x4ffc00000, data 0x6062b/0xdd000, compress 0x0/0x0/0x0, omap 0xa7f9, meta 0x1a25807), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 70 heartbeat osd_stat(store_statfs(0x4fe0ed000/0x0/0x4ffc00000, data 0x6062b/0xdd000, compress 0x0/0x0/0x0, omap 0xa7f9, meta 0x1a25807), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1097728 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1089536 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 637611 data_alloc: 218103808 data_used: 1100
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 70 handle_osd_map epochs [70,71], i have 70, src has [1,71]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.967880249s of 10.031354904s, submitted: 124
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1064960 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1056768 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1056768 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 72 heartbeat osd_stat(store_statfs(0x4fe0e7000/0x0/0x4ffc00000, data 0x63d63/0xe3000, compress 0x0/0x0/0x0, omap 0xad16, meta 0x1a252ea), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 1048576 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 72 handle_osd_map epochs [72,73], i have 72, src has [1,73]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 1015808 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 654478 data_alloc: 218103808 data_used: 1685
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=0 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000087 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=0 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000026 1 0.000046
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000058 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000095 1 0.000151
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000037 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000374 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=0 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000070 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=0 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000051 1 0.000023
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000015 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000075 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x658ff/0xe6000, compress 0x0/0x0/0x0, omap 0xaf84, meta 0x1a2507c), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 73 handle_osd_map epochs [73,74], i have 73, src has [1,74]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 73 handle_osd_map epochs [73,74], i have 74, src has [1,74]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.886183 2 0.000029
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.886287 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.886303 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000064 1 0.000104
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.886771 2 0.000312
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.887203 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.887311 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000144 1 0.000315
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000101 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 74 handle_osd_map epochs [74,74], i have 74, src has [1,74]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 1024000 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 999424 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 74 heartbeat osd_stat(store_statfs(0x4fe0e1000/0x0/0x4ffc00000, data 0x673b2/0xe9000, compress 0x0/0x0/0x0, omap 0xb237, meta 0x1a24dc9), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.c( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=43'551 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.038255 5 0.000482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.c( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=43'551 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.c( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=43'551 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.1c( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=43'551 remapped NOTIFY m=9 mbc={}] exit Started/Stray 1.039362 5 0.000032
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.1c( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=43'551 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.1c( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=43'551 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.c( v 43'551 lc 42'66 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002599 4 0.000069
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.c( v 43'551 lc 42'66 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.c( v 43'551 lc 42'66 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000093 1 0.000041
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.c( v 43'551 lc 42'66 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035823 1 0.000082
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.1c( v 43'551 lc 42'114 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.038596 4 0.000081
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.1c( v 43'551 lc 42'114 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.1c( v 43'551 lc 42'114 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000035 1 0.000043
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.1c( v 43'551 lc 42'114 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.066842 1 0.000083
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.933198 1 0.000026
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.971775 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.010478 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000041 1 0.000068
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000021 1 0.000025
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.866150 1 0.000028
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.972427 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.011876 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000113 1 0.000951
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000044 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=10
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=10
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001068 3 0.000030
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000036 1 0.000278
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=25
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=25
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000524 3 0.000055
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68378624 unmapped: 819200 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 76 handle_osd_map epochs [76,77], i have 76, src has [1,77]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 76 handle_osd_map epochs [77,77], i have 77, src has [1,77]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003146 2 0.000042
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002540 2 0.000050
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004280 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.003293 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/47 les/c/f=77/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.000848 4 0.000067
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/47 les/c/f=77/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/47 les/c/f=77/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/47 les/c/f=77/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=7 ec=47/37 lis/c=76/47 les/c/f=77/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001128 4 0.000189
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=7 ec=47/37 lis/c=76/47 les/c/f=77/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=7 ec=47/37 lis/c=76/47 les/c/f=77/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=7 ec=47/37 lis/c=76/47 les/c/f=77/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 77 heartbeat osd_stat(store_statfs(0x4fe0d5000/0x0/0x4ffc00000, data 0x6c5f2/0xf5000, compress 0x0/0x0/0x0, omap 0xb9cc, meta 0x1a24634), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694860 data_alloc: 218103808 data_used: 2278
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 77 handle_osd_map epochs [77,78], i have 77, src has [1,78]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.017603874s of 10.048166275s, submitted: 74
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 704512 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 663552 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 699325 data_alloc: 218103808 data_used: 2278
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 78 handle_osd_map epochs [80,80], i have 78, src has [1,80]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 78 handle_osd_map epochs [79,80], i have 78, src has [1,80]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 80 heartbeat osd_stat(store_statfs(0x4fe0cc000/0x0/0x4ffc00000, data 0x718c6/0xfe000, compress 0x0/0x0/0x0, omap 0xbef6, meta 0x1a2410a), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 630784 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 630784 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68599808 unmapped: 598016 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 581632 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 80 handle_osd_map epochs [81,82], i have 80, src has [1,82]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.a scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.a scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 507904 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 718930 data_alloc: 218103808 data_used: 2863
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 499712 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 82 handle_osd_map epochs [82,83], i have 82, src has [1,83]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.031378746s of 10.040954590s, submitted: 12
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 83 heartbeat osd_stat(store_statfs(0x4fe0c6000/0x0/0x4ffc00000, data 0x74ffe/0x104000, compress 0x0/0x0/0x0, omap 0xc167, meta 0x1a23e99), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=0 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000106 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=0 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000020 1 0.000040
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000069 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000103 1 0.000163
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000032 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000181 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.e scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.e scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 491520 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 83 handle_osd_map epochs [83,84], i have 83, src has [1,84]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 83 handle_osd_map epochs [84,84], i have 84, src has [1,84]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.925813 2 0.000097
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.926052 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.926197 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000158 1 0.000251
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000045 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 483328 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 84 handle_osd_map epochs [84,85], i have 84, src has [1,85]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 85 pg[9.13( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] exit Started/Stray 0.999723 6 0.000118
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 85 pg[9.13( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 85 pg[9.13( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 85 pg[9.13( v 43'551 lc 42'122 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002342 3 0.000090
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 85 pg[9.13( v 43'551 lc 42'122 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 85 pg[9.13( v 43'551 lc 42'122 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000061 1 0.000040
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 85 pg[9.13( v 43'551 lc 42'122 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 85 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.042634 1 0.000079
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 85 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 85 handle_osd_map epochs [85,86], i have 85, src has [1,86]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.821190 1 0.000035
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.866330 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 1.866144 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000064 1 0.000090
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004590 2 0.000029
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 86 handle_osd_map epochs [86,86], i have 86, src has [1,86]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=16
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=16
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000234 2 0.000051
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68935680 unmapped: 1310720 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 86 handle_osd_map epochs [87,87], i have 87, src has [1,87]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 87 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001079 2 0.000046
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 87 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005960 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 87 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 87 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=86/87 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 87 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=86/87 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 87 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=86/87 n=6 ec=47/37 lis/c=86/54 les/c/f=87/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.000887 4 0.000118
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 87 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=86/87 n=6 ec=47/37 lis/c=86/54 les/c/f=87/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 87 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=86/87 n=6 ec=47/37 lis/c=86/54 les/c/f=87/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 87 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=86/87 n=6 ec=47/37 lis/c=86/54 les/c/f=87/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 1269760 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746892 data_alloc: 218103808 data_used: 2863
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69001216 unmapped: 1245184 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 89 heartbeat osd_stat(store_statfs(0x4fcf0c000/0x0/0x4ffc00000, data 0x7f367/0x118000, compress 0x0/0x0/0x0, omap 0xd0af, meta 0x2bc2f51), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1171456 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1163264 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1163264 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 89 handle_osd_map epochs [90,91], i have 89, src has [1,91]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1155072 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 762276 data_alloc: 218103808 data_used: 3392
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 91 handle_osd_map epochs [91,92], i have 91, src has [1,92]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 92 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=43'551 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 39.029935 78 0.000192
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 92 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active 39.032118 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 92 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary 40.026369 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 92 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=43'551 mlcod 0'0 active mbc={}] exit Started 40.026389 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 92 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=43'551 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 92 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=8.970234871s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=43'551 active pruub 165.223297119s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 92 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=8.970201492s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=43'551 unknown NOTIFY pruub 165.223297119s@ mbc={}] exit Reset 0.000104 1 0.000126
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 92 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=8.970201492s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=43'551 unknown NOTIFY pruub 165.223297119s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 92 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=8.970201492s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=43'551 unknown NOTIFY pruub 165.223297119s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 92 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=8.970201492s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=43'551 unknown NOTIFY pruub 165.223297119s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 92 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=8.970201492s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=43'551 unknown NOTIFY pruub 165.223297119s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 92 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=8.970201492s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=43'551 unknown NOTIFY pruub 165.223297119s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 92 handle_osd_map epochs [92,92], i have 92, src has [1,92]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1122304 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.538240433s of 10.568582535s, submitted: 59
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 92 heartbeat osd_stat(store_statfs(0x4fcf04000/0x0/0x4ffc00000, data 0x85e1e/0x124000, compress 0x0/0x0/0x0, omap 0xd85c, meta 0x2bc27a4), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=-1 lpr=92 pi=[64,92)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 1.008826 3 0.000054
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=-1 lpr=92 pi=[64,92)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 1.008953 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=-1 lpr=92 pi=[64,92)/1 crt=43'551 unknown NOTIFY mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Reset 0.000128 1 0.000283
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Start 0.000047 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001857 2 0.000141
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 93 handle_osd_map epochs [93,93], i have 93, src has [1,93]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000041 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69156864 unmapped: 1089536 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 93 handle_osd_map epochs [93,94], i have 93, src has [1,94]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005813 3 0.000171
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.007861 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 94 handle_osd_map epochs [94,94], i have 94, src has [1,94]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.002699 5 0.000162
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000067 1 0.000066
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000757 1 0.000013
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.028361 2 0.000042
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 1040384 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 94 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.920479 1 0.000068
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active 0.952535 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary 1.960432 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started 1.960527 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=15.050017357s) [0] async=[0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 active pruub 174.272872925s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=15.049875259s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 unknown NOTIFY pruub 174.272872925s@ mbc={}] exit Reset 0.000187 1 0.000262
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=15.049875259s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 unknown NOTIFY pruub 174.272872925s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=15.049875259s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 unknown NOTIFY pruub 174.272872925s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=15.049875259s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 unknown NOTIFY pruub 174.272872925s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=15.049875259s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 unknown NOTIFY pruub 174.272872925s@ mbc={}] exit Start 0.000103 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=15.049875259s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 unknown NOTIFY pruub 174.272872925s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 1040384 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 96 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 1.002731 7 0.000216
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 96 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 96 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 96 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000063 1 0.000057
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 96 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 96 pg[9.16( v 43'551 (0'0,43'551] lb MIN local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 DELETING pi=[64,95)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.030581 2 0.000152
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 96 pg[9.16( v 43'551 (0'0,43'551] lb MIN local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete 0.030696 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 96 pg[9.16( v 43'551 (0'0,43'551] lb MIN local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 1.033587 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 991232 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 773705 data_alloc: 218103808 data_used: 3392
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69263360 unmapped: 983040 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fcef7000/0x0/0x4ffc00000, data 0x8c9f6/0x12f000, compress 0x0/0x0/0x0, omap 0xe284, meta 0x2bc1d7c), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 96 handle_osd_map epochs [96,97], i have 97, src has [1,97]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 97 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19(unlocked)] enter Initial
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=0 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000066 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=0 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000017
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000163 1 0.000034
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000021 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000194 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69263360 unmapped: 983040 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 97 heartbeat osd_stat(store_statfs(0x4fcefd000/0x0/0x4ffc00000, data 0x8c9f6/0x12f000, compress 0x0/0x0/0x0, omap 0xe284, meta 0x2bc1d7c), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.001283 2 0.000039
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.001523 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.001555 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000681 1 0.000754
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 98 handle_osd_map epochs [98,98], i have 98, src has [1,98]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69271552 unmapped: 974848 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69271552 unmapped: 974848 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.a scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 99 pg[9.19( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 crt=43'551 remapped NOTIFY m=9 mbc={}] exit Started/Stray 1.614506 5 0.000052
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 99 pg[9.19( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 crt=43'551 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 99 pg[9.19( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 crt=43'551 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 99 pg[9.19( v 43'551 lc 42'56 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002395 4 0.000164
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 99 pg[9.19( v 43'551 lc 42'56 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 99 pg[9.19( v 43'551 lc 42'56 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000079 1 0.000035
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 99 pg[9.19( v 43'551 lc 42'56 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.a scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 99 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.067183 1 0.000061
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 99 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.330351 1 0.000046
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.400144 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.014701 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000167 1 0.000247
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000080 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000037 1 0.000178
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=25
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=25
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000536 3 0.000053
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000030 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 917504 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 807639 data_alloc: 218103808 data_used: 3392
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.e scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.e scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 101 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000152 2 0.000119
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 101 handle_osd_map epochs [100,101], i have 101, src has [1,101]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 101 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000882 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 101 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 101 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=100/101 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 101 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=100/101 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 101 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=100/101 n=6 ec=47/37 lis/c=100/54 les/c/f=101/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.000877 3 0.000286
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 101 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=100/101 n=6 ec=47/37 lis/c=100/54 les/c/f=101/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 101 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=100/101 n=6 ec=47/37 lis/c=100/54 les/c/f=101/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 101 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=100/101 n=6 ec=47/37 lis/c=100/54 les/c/f=101/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 884736 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 101 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 884736 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 101 heartbeat osd_stat(store_statfs(0x4fcee5000/0x0/0x4ffc00000, data 0x95223/0x141000, compress 0x0/0x0/0x0, omap 0xef2a, meta 0x2bc10d6), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 884736 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.109385490s of 12.143373489s, submitted: 73
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 101 heartbeat osd_stat(store_statfs(0x4fcee5000/0x0/0x4ffc00000, data 0x95223/0x141000, compress 0x0/0x0/0x0, omap 0xef2a, meta 0x2bc10d6), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 876544 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 876544 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 813619 data_alloc: 218103808 data_used: 3392
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 868352 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 868352 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 868352 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.e scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.e scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69394432 unmapped: 851968 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.f scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.f scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 102 heartbeat osd_stat(store_statfs(0x4fcee8000/0x0/0x4ffc00000, data 0x96dbf/0x144000, compress 0x0/0x0/0x0, omap 0xf1fe, meta 0x2bc0e02), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 102 handle_osd_map epochs [103,103], i have 103, src has [1,103]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 103 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=76) [2] r=0 lpr=76 crt=43'551 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 40.825406 75 0.000162
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 103 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=76) [2] r=0 lpr=76 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active 40.826300 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 103 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=76) [2] r=0 lpr=76 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary 41.829615 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 103 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=76) [2] r=0 lpr=76 crt=43'551 mlcod 0'0 active mbc={}] exit Started 41.829702 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 103 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=76) [2] r=0 lpr=76 crt=43'551 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 103 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103 pruub=15.175204277s) [0] r=-1 lpr=103 pi=[76,103)/1 crt=43'551 active pruub 190.297714233s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 103 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103 pruub=15.175173759s) [0] r=-1 lpr=103 pi=[76,103)/1 crt=43'551 unknown NOTIFY pruub 190.297714233s@ mbc={}] exit Reset 0.000058 1 0.000103
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 103 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103 pruub=15.175173759s) [0] r=-1 lpr=103 pi=[76,103)/1 crt=43'551 unknown NOTIFY pruub 190.297714233s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 103 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103 pruub=15.175173759s) [0] r=-1 lpr=103 pi=[76,103)/1 crt=43'551 unknown NOTIFY pruub 190.297714233s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 103 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103 pruub=15.175173759s) [0] r=-1 lpr=103 pi=[76,103)/1 crt=43'551 unknown NOTIFY pruub 190.297714233s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 103 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103 pruub=15.175173759s) [0] r=-1 lpr=103 pi=[76,103)/1 crt=43'551 unknown NOTIFY pruub 190.297714233s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 103 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103 pruub=15.175173759s) [0] r=-1 lpr=103 pi=[76,103)/1 crt=43'551 unknown NOTIFY pruub 190.297714233s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70451200 unmapped: 843776 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 827120 data_alloc: 218103808 data_used: 3392
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103) [0] r=-1 lpr=103 pi=[76,103)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 0.698228 3 0.000032
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103) [0] r=-1 lpr=103 pi=[76,103)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 0.698261 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103) [0] r=-1 lpr=103 pi=[76,103)/1 crt=43'551 unknown NOTIFY mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Reset 0.000063 1 0.000090
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000026 1 0.000030
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000022 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.c scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.c scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70451200 unmapped: 843776 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010217 4 0.000047
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.010297 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70483968 unmapped: 811008 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 105 handle_osd_map epochs [105,105], i have 105, src has [1,105]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.646257 5 0.000585
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000053 1 0.000043
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000314 1 0.000046
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.063613 2 0.000046
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.294860 1 0.000052
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active 1.005319 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary 2.015641 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started 2.015668 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106 pruub=15.640290260s) [0] async=[0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 active pruub 193.476989746s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=43'551 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 60.615667 123 0.000221
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active 60.617189 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary 61.609887 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=43'551 mlcod 0'0 active mbc={}] exit Started 61.609904 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=43'551 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106 pruub=11.385339737s) [0] r=-1 lpr=106 pi=[64,106)/1 crt=43'551 active pruub 189.222137451s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106 pruub=11.385320663s) [0] r=-1 lpr=106 pi=[64,106)/1 crt=43'551 unknown NOTIFY pruub 189.222137451s@ mbc={}] exit Reset 0.000036 1 0.000057
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106 pruub=11.385320663s) [0] r=-1 lpr=106 pi=[64,106)/1 crt=43'551 unknown NOTIFY pruub 189.222137451s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106 pruub=11.385320663s) [0] r=-1 lpr=106 pi=[64,106)/1 crt=43'551 unknown NOTIFY pruub 189.222137451s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106 pruub=11.385320663s) [0] r=-1 lpr=106 pi=[64,106)/1 crt=43'551 unknown NOTIFY pruub 189.222137451s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106 pruub=11.385320663s) [0] r=-1 lpr=106 pi=[64,106)/1 crt=43'551 unknown NOTIFY pruub 189.222137451s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106 pruub=11.385320663s) [0] r=-1 lpr=106 pi=[64,106)/1 crt=43'551 unknown NOTIFY pruub 189.222137451s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106 pruub=15.639621735s) [0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 unknown NOTIFY pruub 193.476989746s@ mbc={}] exit Reset 0.000805 1 0.000878
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106 pruub=15.639621735s) [0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 unknown NOTIFY pruub 193.476989746s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106 pruub=15.639621735s) [0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 unknown NOTIFY pruub 193.476989746s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106 pruub=15.639621735s) [0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 unknown NOTIFY pruub 193.476989746s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106 pruub=15.639621735s) [0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 unknown NOTIFY pruub 193.476989746s@ mbc={}] exit Start 0.000096 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106 pruub=15.639621735s) [0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 unknown NOTIFY pruub 193.476989746s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 106 handle_osd_map epochs [106,106], i have 106, src has [1,106]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70508544 unmapped: 786432 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fced6000/0x0/0x4ffc00000, data 0x9db52/0x150000, compress 0x0/0x0/0x0, omap 0xfbda, meta 0x2bc0426), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106) [0] r=-1 lpr=106 pi=[64,106)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 1.005952 3 0.000023
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106) [0] r=-1 lpr=106 pi=[64,106)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 1.005987 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106) [0] r=-1 lpr=106 pi=[64,106)/1 crt=43'551 unknown NOTIFY mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Reset 0.000048 1 0.000077
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000226 1 0.000027
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000021 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 1.006074 7 0.000289
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000038 1 0.000039
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1c( v 43'551 (0'0,43'551] lb MIN local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=-1 lpr=106 DELETING pi=[76,106)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.067693 2 0.000120
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1c( v 43'551 (0'0,43'551] lb MIN local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete 0.067764 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1c( v 43'551 (0'0,43'551] lb MIN local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 1.074028 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 1753088 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 107 handle_osd_map epochs [107,108], i have 107, src has [1,108]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.767161369s of 10.787987709s, submitted: 34
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999221 4 0.000079
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.999535 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 108 handle_osd_map epochs [107,108], i have 108, src has [1,108]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 108 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.571800 5 0.000232
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000069 1 0.000069
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70615040 unmapped: 1728512 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 830759 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000510 1 0.000064
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.042439 2 0.000033
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 108 handle_osd_map epochs [109,109], i have 109, src has [1,109]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.396310 1 0.000068
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active 1.011356 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary 2.010920 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started 2.010947 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109 pruub=15.559910774s) [0] async=[0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 active pruub 196.413848877s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109 pruub=15.559823990s) [0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 unknown NOTIFY pruub 196.413848877s@ mbc={}] exit Reset 0.000193 1 0.000283
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109 pruub=15.559823990s) [0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 unknown NOTIFY pruub 196.413848877s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109 pruub=15.559823990s) [0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 unknown NOTIFY pruub 196.413848877s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109 pruub=15.559823990s) [0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 unknown NOTIFY pruub 196.413848877s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109 pruub=15.559823990s) [0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 unknown NOTIFY pruub 196.413848877s@ mbc={}] exit Start 0.000068 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109 pruub=15.559823990s) [0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 unknown NOTIFY pruub 196.413848877s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 1703936 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 110 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 1.035395 6 0.000352
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 110 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 110 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 110 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000863 2 0.000081
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 110 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 110 pg[9.1e( v 43'551 (0'0,43'551] lb MIN local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=-1 lpr=109 DELETING pi=[64,109)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.045392 2 0.000153
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 110 pg[9.1e( v 43'551 (0'0,43'551] lb MIN local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete 0.046324 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 110 pg[9.1e( v 43'551 (0'0,43'551] lb MIN local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 1.081924 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 1638400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 1638400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fced2000/0x0/0x4ffc00000, data 0xa42b4/0x158000, compress 0x0/0x0/0x0, omap 0x1061c, meta 0x2bbf9e4), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 1646592 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fced2000/0x0/0x4ffc00000, data 0xa42b4/0x158000, compress 0x0/0x0/0x0, omap 0x1061c, meta 0x2bbf9e4), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 1646592 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 832198 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fced2000/0x0/0x4ffc00000, data 0xa42b4/0x158000, compress 0x0/0x0/0x0, omap 0x1061c, meta 0x2bbf9e4), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 110 handle_osd_map epochs [110,111], i have 110, src has [1,111]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 111 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=43'551 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 65.681385 130 0.000669
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 111 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active 65.685412 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 111 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary 66.687219 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 111 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=43'551 mlcod 0'0 active mbc={}] exit Started 66.687451 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 111 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=43'551 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 111 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111 pruub=14.319065094s) [1] r=-1 lpr=111 pi=[67,111)/1 crt=43'551 active pruub 200.222534180s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 111 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111 pruub=14.319033623s) [1] r=-1 lpr=111 pi=[67,111)/1 crt=43'551 unknown NOTIFY pruub 200.222534180s@ mbc={}] exit Reset 0.000124 1 0.000655
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 111 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111 pruub=14.319033623s) [1] r=-1 lpr=111 pi=[67,111)/1 crt=43'551 unknown NOTIFY pruub 200.222534180s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 111 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111 pruub=14.319033623s) [1] r=-1 lpr=111 pi=[67,111)/1 crt=43'551 unknown NOTIFY pruub 200.222534180s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 111 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111 pruub=14.319033623s) [1] r=-1 lpr=111 pi=[67,111)/1 crt=43'551 unknown NOTIFY pruub 200.222534180s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 111 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111 pruub=14.319033623s) [1] r=-1 lpr=111 pi=[67,111)/1 crt=43'551 unknown NOTIFY pruub 200.222534180s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 111 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111 pruub=14.319033623s) [1] r=-1 lpr=111 pi=[67,111)/1 crt=43'551 unknown NOTIFY pruub 200.222534180s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 111 handle_osd_map epochs [111,111], i have 111, src has [1,111]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70729728 unmapped: 1613824 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 111 heartbeat osd_stat(store_statfs(0x4fcecf000/0x0/0x4ffc00000, data 0xa5e50/0x15b000, compress 0x0/0x0/0x0, omap 0x10894, meta 0x2bbf76c), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=-1 lpr=111 pi=[67,111)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 1.008059 3 0.000027
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=-1 lpr=111 pi=[67,111)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 1.008091 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=-1 lpr=111 pi=[67,111)/1 crt=43'551 unknown NOTIFY mbc={}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Reset 0.000043 1 0.000067
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000023 1 0.000028
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000020 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 1605632 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 112 handle_osd_map epochs [112,113], i have 112, src has [1,113]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 113 handle_osd_map epochs [112,113], i have 113, src has [1,113]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998964 4 0.000043
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.999052 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 113 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.346602 5 0.000253
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000047 1 0.000027
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000256 1 0.000016
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.039714 2 0.000047
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 1597440 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.622825 1 0.000089
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active 1.009670 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary 2.008757 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started 2.008786 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Reset
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114 pruub=15.336898804s) [1] async=[1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 active pruub 204.257400513s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114 pruub=15.336791992s) [1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 unknown NOTIFY pruub 204.257400513s@ mbc={}] exit Reset 0.000146 1 0.000224
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114 pruub=15.336791992s) [1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 unknown NOTIFY pruub 204.257400513s@ mbc={}] enter Started
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114 pruub=15.336791992s) [1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 unknown NOTIFY pruub 204.257400513s@ mbc={}] enter Start
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114 pruub=15.336791992s) [1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 unknown NOTIFY pruub 204.257400513s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114 pruub=15.336791992s) [1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 unknown NOTIFY pruub 204.257400513s@ mbc={}] exit Start 0.000046 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114 pruub=15.336791992s) [1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 unknown NOTIFY pruub 204.257400513s@ mbc={}] enter Started/Stray
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 114 handle_osd_map epochs [114,114], i have 114, src has [1,114]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70664192 unmapped: 1679360 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 114 handle_osd_map epochs [114,115], i have 114, src has [1,115]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.058753967s of 10.078710556s, submitted: 38
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 1.004258 7 0.000147
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000055 1 0.000082
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] lb MIN local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=-1 lpr=114 DELETING pi=[67,114)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.037845 2 0.000145
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] lb MIN local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete 0.037941 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] lb MIN local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 1.042309 0 0.000000
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 1646592 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 840090 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 1646592 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 1638400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 1638400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 1630208 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 1695744 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 843266 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 1695744 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 1687552 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.a scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.a scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 1687552 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70664192 unmapped: 1679360 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.099387169s of 10.109117508s, submitted: 13
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70664192 unmapped: 1679360 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 850509 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70664192 unmapped: 1679360 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 1662976 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 1662976 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 1662976 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 1646592 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857754 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 1646592 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 1638400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.c scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.c scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 1638400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 1630208 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 1630208 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 864993 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.015295982s of 11.023313522s, submitted: 14
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 1622016 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 1622016 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70729728 unmapped: 1613824 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.b scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.b scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70729728 unmapped: 1613824 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 1605632 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 869819 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 1581056 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 1581056 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 1572864 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 1572864 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 1572864 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874647 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 1540096 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.918740273s of 10.926405907s, submitted: 10
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 1531904 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 1523712 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 1523712 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 1523712 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881890 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 1515520 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 1515520 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 1515520 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 1507328 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 1490944 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886720 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 1482752 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 1482752 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.961461067s of 10.966039658s, submitted: 8
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 1474560 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 1474560 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 1466368 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 891548 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70893568 unmapped: 1449984 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 1433600 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.d scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.d scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 1425408 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 1425408 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.d scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.d scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 1409024 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903611 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 1400832 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 1400832 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 1400832 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 1392640 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 1392640 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908437 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70959104 unmapped: 1384448 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.983901024s of 13.995039940s, submitted: 18
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 1376256 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 1376256 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 1368064 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 1368064 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913265 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 1359872 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 1359872 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 1351680 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 1351680 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 1351680 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915680 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 1343488 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 1343488 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 1343488 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.931067467s of 11.936884880s, submitted: 6
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 1327104 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 1327104 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920506 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 1327104 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 1327104 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 1318912 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 1318912 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 1310720 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927741 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 1310720 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 1302528 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 1302528 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 1294336 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 1294336 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927741 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.829256058s of 11.836503983s, submitted: 10
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 1294336 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.e scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.e scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 1286144 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 1286144 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 1277952 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 1261568 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937389 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 1253376 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 1253376 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 1253376 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71098368 unmapped: 1245184 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71098368 unmapped: 1245184 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937389 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71098368 unmapped: 1245184 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71106560 unmapped: 1236992 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71106560 unmapped: 1236992 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 1220608 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.025672913s of 14.031677246s, submitted: 8
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 1220608 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939800 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.c scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.c scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 1220608 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 1212416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 1212416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 1204224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 1187840 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942211 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 1187840 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 1171456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.f scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.f scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 1163264 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 1163264 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 1146880 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 1138688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 1138688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 1130496 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 1130496 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 1122304 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 1122304 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 1122304 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 1114112 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 1114112 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 1097728 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 1097728 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 1097728 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1089536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1089536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1089536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 1081344 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 1081344 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 1081344 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1073152 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1073152 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1064960 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1064960 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 1056768 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 1056768 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 1048576 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 1048576 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 1048576 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 1040384 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 1040384 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 1040384 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 1032192 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 1032192 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 1032192 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 1024000 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 1024000 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 1015808 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 1015808 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 1015808 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 1007616 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 1007616 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 999424 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 999424 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 999424 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 991232 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 999424 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 991232 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 991232 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 991232 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 983040 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 983040 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 974848 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 974848 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 950272 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 950272 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 950272 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 942080 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 942080 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 942080 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 933888 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 933888 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 925696 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 925696 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 925696 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 917504 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 917504 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 909312 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 909312 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 909312 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 901120 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 876544 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 868352 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 868352 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 868352 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 860160 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 860160 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 851968 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 851968 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 851968 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71499776 unmapped: 843776 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71499776 unmapped: 843776 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 835584 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 835584 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 835584 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 827392 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 819200 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 819200 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71532544 unmapped: 811008 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71532544 unmapped: 811008 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 802816 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 786432 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 786432 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 778240 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 778240 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 778240 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 770048 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 770048 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 761856 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 761856 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 761856 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71589888 unmapped: 753664 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71589888 unmapped: 753664 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 745472 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 745472 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 745472 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 737280 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 737280 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 737280 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 729088 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 712704 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 704512 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 704512 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 704512 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 696320 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 696320 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71663616 unmapped: 679936 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71663616 unmapped: 679936 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 671744 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 671744 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 671744 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 663552 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 663552 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 663552 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71688192 unmapped: 655360 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 647168 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71704576 unmapped: 638976 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71712768 unmapped: 630784 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71712768 unmapped: 630784 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71712768 unmapped: 630784 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 622592 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 622592 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 614400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 614400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 606208 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 606208 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 606208 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 598016 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 598016 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 598016 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 581632 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 581632 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 573440 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 573440 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 565248 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 565248 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 565248 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 557056 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 557056 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 548864 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 548864 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 548864 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 540672 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 540672 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 532480 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 532480 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 524288 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 524288 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 516096 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 516096 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71835648 unmapped: 507904 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71835648 unmapped: 507904 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71835648 unmapped: 507904 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71843840 unmapped: 499712 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71843840 unmapped: 499712 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 491520 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 491520 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 491520 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 483328 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 483328 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 475136 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 475136 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 475136 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 475136 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71876608 unmapped: 466944 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 458752 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71892992 unmapped: 450560 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71892992 unmapped: 450560 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71892992 unmapped: 450560 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 442368 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 434176 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 434176 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 434176 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 425984 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 425984 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 417792 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 417792 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 417792 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 409600 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 409600 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 409600 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71942144 unmapped: 401408 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71942144 unmapped: 401408 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 393216 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 393216 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 385024 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 368640 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 368640 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 360448 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 360448 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 360448 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 352256 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 352256 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 352256 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71999488 unmapped: 344064 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71999488 unmapped: 344064 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 335872 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 335872 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 335872 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 327680 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 327680 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 319488 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 319488 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 311296 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 311296 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 311296 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 294912 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 294912 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 294912 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 286720 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 286720 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 278528 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 270336 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 270336 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 262144 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 262144 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 253952 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 245760 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 245760 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 237568 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 229376 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 229376 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 221184 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 221184 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 212992 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 212992 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 204800 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 204800 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 204800 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 196608 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 196608 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 172032 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 163840 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 163840 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 163840 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 155648 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 155648 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 139264 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 139264 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 139264 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 122880 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 122880 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 122880 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 106496 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 98304 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 81920 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 73728 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 73728 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 73728 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 57344 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 57344 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 57344 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 57344 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 57344 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 49152 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 49152 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 40960 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 40960 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 32768 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 32768 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 24576 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 24576 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 24576 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72343552 unmapped: 0 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72343552 unmapped: 0 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72343552 unmapped: 0 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 1040384 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 1040384 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 1040384 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 1032192 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 1032192 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 1024000 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 1024000 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 1024000 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 1015808 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 1015808 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 1007616 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 1007616 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 999424 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 999424 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 999424 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72400896 unmapped: 991232 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72400896 unmapped: 991232 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5589 writes, 24K keys, 5589 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5589 writes, 841 syncs, 6.65 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5586 writes, 24K keys, 5586 commit groups, 1.0 writes per commit group, ingest: 18.46 MB, 0.03 MB/s#012Interval WAL: 5587 writes, 841 syncs, 6.64 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 925696 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 925696 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 917504 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 917504 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 909312 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 901120 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 892928 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 892928 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 892928 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 892928 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 884736 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 884736 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 884736 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 876544 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 876544 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72531968 unmapped: 860160 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72531968 unmapped: 860160 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72531968 unmapped: 860160 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 851968 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 851968 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 819200 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 819200 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 819200 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 819200 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 819200 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 811008 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 811008 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 802816 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 802816 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 802816 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 794624 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 794624 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 794624 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 770048 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 770048 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 770048 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 753664 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 753664 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 753664 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 745472 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 745472 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 362.667175293s of 362.673522949s, submitted: 8
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1540096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1540096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 1523712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 1523712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 1523712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 1523712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 1523712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1515520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1515520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1515520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 1507328 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 1507328 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 1490944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 1490944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 1490944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 1482752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 1482752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1458176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 1449984 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 1449984 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1441792 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1441792 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 1425408 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 1425408 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 1417216 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 1417216 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 1417216 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 1400832 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 1400832 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 1400832 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 1392640 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 1392640 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 1376256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 1376256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 1376256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 1368064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 1359872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 1343488 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 1343488 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 1335296 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 1335296 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 1335296 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 1327104 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 1318912 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 1318912 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 1310720 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 1310720 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 1302528 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 1302528 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 1294336 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 1294336 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 1294336 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 1294336 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 1286144 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 1286144 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 1286144 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73162752 unmapped: 1277952 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 1253376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73195520 unmapped: 1245184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73195520 unmapped: 1245184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 1236992 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 1236992 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73211904 unmapped: 1228800 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1220608 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1220608 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1220608 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1212416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1212416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1204224 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1204224 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1204224 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1196032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1196032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1196032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1196032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1196032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1196032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1171456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1171456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1171456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1171456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1171456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1155072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1155072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1155072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1155072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1155072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1155072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1155072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1122304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1122304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1122304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1122304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1122304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1122304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1122304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1122304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1122304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1122304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1105920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1105920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1105920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1105920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1105920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1105920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1105920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1105920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1105920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1105920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1081344 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1081344 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1081344 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1081344 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1081344 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1081344 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1081344 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1081344 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1081344 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1081344 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1073152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1073152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1073152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1073152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1064960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1048576 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1048576 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1048576 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1048576 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1048576 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 1007616 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 974848 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 974848 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 974848 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 974848 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 958464 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 958464 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 958464 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 958464 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 958464 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 958464 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 958464 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 958464 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 958464 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 958464 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 933888 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 933888 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 933888 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 933888 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 933888 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 933888 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 ms_handle_reset con 0x5558b5043000 session 0x5558b746ac40
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: mgrc ms_handle_reset ms_handle_reset con 0x5558b3e8fc00
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/4292604849
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/4292604849,v1:192.168.122.100:6801/4292604849]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: mgrc handle_mgr_configure stats_period=5
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 ms_handle_reset con 0x5558b718b400 session 0x5558b580aa80
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 300.072052002s of 300.109008789s, submitted: 90
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 983040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 983040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 983040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 983040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 983040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 983040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 983040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 983040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 983040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 983040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 13 02:36:25 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1486468914' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 958464 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 958464 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 958464 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 958464 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 950272 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 950272 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 950272 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 950272 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 950272 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 925696 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 925696 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 925696 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 925696 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 925696 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 884736 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 884736 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 884736 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 884736 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 884736 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 876544 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 876544 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 876544 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 876544 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 876544 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 868352 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 868352 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 868352 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 868352 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 868352 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 860160 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 860160 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 860160 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 860160 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 860160 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5817 writes, 24K keys, 5817 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5817 writes, 955 syncs, 6.09 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s#012Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 1.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 1.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.917877197s of 299.925598145s, submitted: 24
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 589824 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 327680 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: do_command 'config diff' '{prefix=config diff}'
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: do_command 'config show' '{prefix=config show}'
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: do_command 'counter dump' '{prefix=counter dump}'
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: do_command 'counter schema' '{prefix=counter schema}'
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 1802240 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 2015232 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 02:36:25 np0005558317 ceph-osd[87155]: do_command 'log dump' '{prefix=log dump}'
Dec 13 02:36:25 np0005558317 nova_compute[241222]: 2025-12-13 07:36:25.881 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:36:25 np0005558317 nova_compute[241222]: 2025-12-13 07:36:25.881 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:36:25 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14448 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:36:25 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.kikquh", "name": "rgw_frontends"} v 0)
Dec 13 02:36:25 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kikquh", "name": "rgw_frontends"} : dispatch
Dec 13 02:36:26 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 13 02:36:26 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3065973922' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 13 02:36:26 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14452 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:36:26 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.kikquh", "name": "rgw_frontends"} v 0)
Dec 13 02:36:26 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kikquh", "name": "rgw_frontends"} : dispatch
Dec 13 02:36:26 np0005558317 nova_compute[241222]: 2025-12-13 07:36:26.563 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:36:26 np0005558317 nova_compute[241222]: 2025-12-13 07:36:26.567 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:36:26 np0005558317 nova_compute[241222]: 2025-12-13 07:36:26.567 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 02:36:26 np0005558317 nova_compute[241222]: 2025-12-13 07:36:26.567 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 02:36:26 np0005558317 nova_compute[241222]: 2025-12-13 07:36:26.581 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 02:36:26 np0005558317 nova_compute[241222]: 2025-12-13 07:36:26.581 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:36:26 np0005558317 nova_compute[241222]: 2025-12-13 07:36:26.581 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 02:36:26 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v790: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:36:26 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 13 02:36:26 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4172151140' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 13 02:36:26 np0005558317 podman[247867]: 2025-12-13 07:36:26.760459982 +0000 UTC m=+0.100722168 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 13 02:36:26 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14456 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:36:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 13 02:36:27 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1857435989' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 13 02:36:27 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14460 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:36:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 13 02:36:27 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1430319980' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 13 02:36:27 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14464 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:36:27 np0005558317 nova_compute[241222]: 2025-12-13 07:36:27.568 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:36:27 np0005558317 nova_compute[241222]: 2025-12-13 07:36:27.568 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:36:27 np0005558317 nova_compute[241222]: 2025-12-13 07:36:27.568 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:36:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:36:27 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14468 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 02:36:27 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec 13 02:36:27 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/390024380' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 13 02:36:28 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14470 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 02:36:28 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14474 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 02:36:28 np0005558317 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v791: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:36:28 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Dec 13 02:36:28 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/464768103' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 13 02:36:28 np0005558317 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14478 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 02:36:28 np0005558317 ceph-mgr[75200]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 13 02:36:28 np0005558317 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-qsherl[75196]: 2025-12-13T07:36:28.967+0000 7facc0ef1640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 13 02:36:29 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Dec 13 02:36:29 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1235680705' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/41 les/c/f=50/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/41 les/c/f=50/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/41 les/c/f=50/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/41 les/c/f=50/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/41 les/c/f=50/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/41 les/c/f=50/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001789 3 0.000257
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/41 les/c/f=50/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/41 les/c/f=50/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/41 les/c/f=50/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 4014080 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 4014080 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 50 heartbeat osd_stat(store_statfs(0x4fe085000/0x0/0x4ffc00000, data 0xaf168/0x141000, compress 0x0/0x0/0x0, omap 0x6e32, meta 0x1a291ce), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 4014080 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 50 heartbeat osd_stat(store_statfs(0x4fe08b000/0x0/0x4ffc00000, data 0xaf168/0x141000, compress 0x0/0x0/0x0, omap 0x6e32, meta 0x1a291ce), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 3981312 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 3973120 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907256 data_alloc: 218103808 data_used: 10897
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 50 handle_osd_map epochs [51,51], i have 50, src has [1,51]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13(unlocked)] enter Initial
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000523 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000016
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000396 1 0.000040
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.154844 14 0.000167
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.160058 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.160094 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.160113 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.844452858s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718498230s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.844369888s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718498230s@ mbc={}] exit Reset 0.000102 1 0.000128
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.844369888s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718498230s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.844369888s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718498230s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.844369888s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718498230s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.844369888s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718498230s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.844369888s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718498230s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12(unlocked)] enter Initial
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000402 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000022
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000194 1 0.000320
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.145495 7 0.000050
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.147650 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.147709 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.147730 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854372978s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.729553223s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854341507s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.729553223s@ mbc={}] exit Reset 0.000047 1 0.000076
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854341507s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.729553223s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854341507s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.729553223s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854341507s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.729553223s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854341507s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.729553223s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854341507s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.729553223s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.156127 14 0.000037
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.161152 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.161421 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.161437 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.843180656s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718505859s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.843167305s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718505859s@ mbc={}] exit Reset 0.000025 1 0.000047
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.843167305s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718505859s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.843167305s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718505859s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.843167305s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718505859s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.843167305s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718505859s@ mbc={}] exit Start 0.000009 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.843167305s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718505859s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.127038 1 0.000018
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.129538 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.129580 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.129596 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.872672081s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748138428s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.145241 7 0.000026
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.147866 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.147900 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.147915 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854546547s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730064392s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.872431755s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748138428s@ mbc={}] exit Reset 0.000257 1 0.000275
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.872431755s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748138428s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.872431755s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748138428s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.872431755s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748138428s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.872431755s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748138428s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.872431755s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748138428s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11(unlocked)] enter Initial
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000057 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000035 1 0.000040
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854331970s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730064392s@ mbc={}] exit Reset 0.000231 1 0.000262
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854331970s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730064392s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854331970s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730064392s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854331970s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730064392s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854331970s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730064392s@ mbc={}] exit Start 0.000006 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854331970s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730064392s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000138 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000056 1 0.000161
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 51 handle_osd_map epochs [51,51], i have 51, src has [1,51]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10(unlocked)] enter Initial
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000080 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000012
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000063 1 0.000023
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.158256 14 0.000030
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.163532 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.163568 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.163583 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.841082573s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718490601s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.841072083s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718490601s@ mbc={}] exit Reset 0.000023 1 0.000043
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.841072083s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718490601s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.841072083s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718490601s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.841072083s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718490601s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.841072083s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718490601s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.841072083s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718490601s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.147024 7 0.000028
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.149672 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.149704 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.149718 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852649689s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730140686s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852640152s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730140686s@ mbc={}] exit Reset 0.000021 1 0.000037
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852640152s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730140686s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852640152s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730140686s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852640152s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730140686s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852640152s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730140686s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852640152s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730140686s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.129103 1 0.000016
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.131356 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.131396 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.131411 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870583534s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748161316s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870573044s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748161316s@ mbc={}] exit Reset 0.000019 1 0.000041
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870573044s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748161316s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870573044s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748161316s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870573044s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748161316s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870573044s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748161316s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870573044s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748161316s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a(unlocked)] enter Initial
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000015
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000046 1 0.000021
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.129356 1 0.000019
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.131546 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.131656 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.131672 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870308876s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748184204s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870298386s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748184204s@ mbc={}] exit Reset 0.000035 1 0.000039
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870298386s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748184204s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870298386s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748184204s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870298386s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748184204s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870298386s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748184204s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870298386s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748184204s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.158862 14 0.000030
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.164169 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.164250 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.164267 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.840455055s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718482971s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.840443611s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718482971s@ mbc={}] exit Reset 0.000023 1 0.000069
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.840443611s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718482971s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.840443611s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718482971s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.840443611s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718482971s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.840443611s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718482971s@ mbc={}] exit Start 0.000006 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.840443611s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718482971s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.147603 7 0.000022
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.150189 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.150219 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.150233 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852058411s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730171204s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852048874s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730171204s@ mbc={}] exit Reset 0.000020 1 0.000045
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852048874s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730171204s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852048874s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730171204s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852048874s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730171204s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852048874s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730171204s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852048874s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730171204s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19(unlocked)] enter Initial
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000040 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000008
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000040 1 0.000032
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6(unlocked)] enter Initial
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000012
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000030 1 0.000019
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.130041 1 0.000018
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.132219 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.132258 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.132286 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869616508s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748191833s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869601250s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748191833s@ mbc={}] exit Reset 0.000025 1 0.000046
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869601250s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748191833s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869601250s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748191833s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869601250s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748191833s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869601250s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748191833s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869601250s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748191833s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.159690 14 0.000097
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.164972 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.165011 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.165025 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839732170s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718414307s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839722633s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718414307s@ mbc={}] exit Reset 0.000018 1 0.000032
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839722633s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718414307s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839722633s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718414307s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839722633s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718414307s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839722633s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718414307s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839722633s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718414307s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.148224 7 0.000032
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.150708 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.150744 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.150785 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.851435661s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730194092s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.851426125s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730194092s@ mbc={}] exit Reset 0.000023 1 0.000039
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.851426125s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730194092s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.851426125s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730194092s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.851426125s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730194092s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.851426125s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730194092s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.851426125s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730194092s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.130215 1 0.000019
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.132428 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.132466 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.132488 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869359016s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748191833s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869349480s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748191833s@ mbc={}] exit Reset 0.000018 1 0.000038
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869349480s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748191833s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869349480s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748191833s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869349480s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748191833s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869349480s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748191833s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869349480s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748191833s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f(unlocked)] enter Initial
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000041 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000049 1 0.000037
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.160334 14 0.000546
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.165404 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.165435 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.165449 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839279175s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718368530s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839269638s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718368530s@ mbc={}] exit Reset 0.000021 1 0.000042
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839269638s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718368530s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839269638s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718368530s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839269638s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718368530s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839269638s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718368530s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839269638s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718368530s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.130631 1 0.000019
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.132753 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.132786 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.132802 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869021416s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748207092s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868998528s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748207092s@ mbc={}] exit Reset 0.000034 1 0.000060
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868998528s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748207092s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868998528s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748207092s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868998528s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748207092s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868998528s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748207092s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868998528s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748207092s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.160821 14 0.000751
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.165677 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.165709 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.165726 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839054108s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718353271s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839038849s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718353271s@ mbc={}] exit Reset 0.000036 1 0.000044
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839038849s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718353271s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839038849s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718353271s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839038849s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718353271s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839038849s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718353271s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839038849s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718353271s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.148840 7 0.000020
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.151261 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.151305 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.151320 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850866318s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730262756s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850856781s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730262756s@ mbc={}] exit Reset 0.000019 1 0.000032
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850856781s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730262756s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850856781s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730262756s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850856781s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730262756s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850856781s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730262756s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850856781s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730262756s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.130839 1 0.000141
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.132983 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.133015 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.133028 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868752480s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748222351s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868742943s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748222351s@ mbc={}] exit Reset 0.000017 1 0.000030
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868742943s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748222351s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868742943s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748222351s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868742943s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748222351s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868742943s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748222351s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868742943s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748222351s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.148968 7 0.000023
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.151414 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.151451 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.151466 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850701332s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730285645s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850690842s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730285645s@ mbc={}] exit Reset 0.000019 1 0.000048
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850690842s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730285645s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850690842s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730285645s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850690842s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730285645s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850690842s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730285645s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850690842s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730285645s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.131100 1 0.000018
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.133129 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.133163 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.133177 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868589401s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748245239s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868580818s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748245239s@ mbc={}] exit Reset 0.000018 1 0.000032
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868580818s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748245239s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868580818s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748245239s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868580818s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748245239s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868580818s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748245239s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868580818s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748245239s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.149167 7 0.000020
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.151593 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.151627 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.151642 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850529671s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730285645s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850520134s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730285645s@ mbc={}] exit Reset 0.000019 1 0.000041
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850520134s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730285645s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850520134s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730285645s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850520134s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730285645s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850520134s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730285645s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850520134s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730285645s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.161431 14 0.000036
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.166370 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.166408 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.166421 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838470459s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718345642s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838461876s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718345642s@ mbc={}] exit Reset 0.000019 1 0.000041
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838461876s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718345642s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838461876s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718345642s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838461876s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718345642s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838461876s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718345642s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838461876s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718345642s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.131392 1 0.000017
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.133017 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.133114 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.133362 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868296623s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748268127s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868269920s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748268127s@ mbc={}] exit Reset 0.000036 1 0.000052
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868269920s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748268127s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868269920s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748268127s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868269920s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748268127s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868269920s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748268127s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868269920s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748268127s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.160914 14 0.000032
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.166137 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.166622 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.166637 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838410378s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718490601s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838400841s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718490601s@ mbc={}] exit Reset 0.000019 1 0.000039
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838400841s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718490601s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838400841s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718490601s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838400841s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718490601s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838400841s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718490601s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838400841s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718490601s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.161123 14 0.000037
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.166479 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.166778 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.166792 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838177681s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718376160s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838152885s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718376160s@ mbc={}] exit Reset 0.000048 1 0.000046
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838152885s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718376160s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838152885s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718376160s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838152885s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718376160s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838152885s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718376160s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838152885s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718376160s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.131732 1 0.000017
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.133343 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.133379 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.133394 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867964745s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748283386s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867954254s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748283386s@ mbc={}] exit Reset 0.000018 1 0.000032
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867954254s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748283386s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867954254s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748283386s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867954254s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748283386s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867954254s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748283386s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867954254s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748283386s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.149776 7 0.000195
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.152013 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.152046 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.152060 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849902153s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730323792s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849894524s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730323792s@ mbc={}] exit Reset 0.000018 1 0.000039
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849894524s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730323792s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849894524s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730323792s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849894524s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730323792s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849894524s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730323792s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849894524s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730323792s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.131890 1 0.000017
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.133445 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.133478 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.133492 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867810249s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748298645s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867800713s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748298645s@ mbc={}] exit Reset 0.000017 1 0.000029
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867800713s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748298645s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867800713s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748298645s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867800713s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748298645s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867800713s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748298645s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867800713s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748298645s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.162178 14 0.000152
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.167229 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.167301 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.167314 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.837768555s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718368530s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.837759018s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718368530s@ mbc={}] exit Reset 0.000017 1 0.000037
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.837759018s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718368530s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.837759018s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718368530s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.837759018s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718368530s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.837759018s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718368530s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.837759018s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718368530s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.150006 7 0.000027
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.152168 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.152206 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.152219 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849675179s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730346680s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849659920s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730346680s@ mbc={}] exit Reset 0.000024 1 0.000047
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849659920s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730346680s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849659920s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730346680s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849659920s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730346680s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849659920s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730346680s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849659920s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730346680s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.131858 1 0.000028
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.133555 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.133619 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.133636 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868008614s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748779297s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867999077s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748779297s@ mbc={}] exit Reset 0.000018 1 0.000036
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867999077s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748779297s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867999077s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748779297s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867999077s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748779297s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867999077s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748779297s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867999077s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748779297s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.162521 14 0.000033
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.167680 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.167718 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.167732 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.132568 1 0.000017
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.135033 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.135076 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.135091 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867127419s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748153687s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867115974s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748153687s@ mbc={}] exit Reset 0.000036 1 0.000046
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867115974s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748153687s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867115974s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748153687s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867115974s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748153687s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867115974s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748153687s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867115974s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748153687s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 43'551 mlcod 43'551 active+clean] exit Started/Primary/Active/Clean 7.150427 7 0.000021
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 43'551 mlcod 43'551 active mbc={}] exit Started/Primary/Active 7.152517 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 43'551 mlcod 43'551 active mbc={}] exit Started/Primary 7.152551 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 43'551 mlcod 43'551 active mbc={}] exit Started 7.152564 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 43'551 mlcod 43'551 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849251747s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 43'551 active pruub 109.730361938s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849235535s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 43'551 unknown NOTIFY pruub 109.730361938s@ mbc={}] exit Reset 0.000025 1 0.000039
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849235535s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 43'551 unknown NOTIFY pruub 109.730361938s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849235535s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 43'551 unknown NOTIFY pruub 109.730361938s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849235535s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 43'551 unknown NOTIFY pruub 109.730361938s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849235535s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 43'551 unknown NOTIFY pruub 109.730361938s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849235535s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 43'551 unknown NOTIFY pruub 109.730361938s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.132270 1 0.000127
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.133712 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.133877 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.133891 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867587090s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748779297s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867555618s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748779297s@ mbc={}] exit Reset 0.000040 1 0.000055
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867555618s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748779297s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867555618s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748779297s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867555618s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748779297s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867555618s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748779297s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867555618s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748779297s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.162931 14 0.000046
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.168137 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.168185 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.168209 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836945534s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718269348s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836936951s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718269348s@ mbc={}] exit Reset 0.000019 1 0.000042
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836936951s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718269348s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836936951s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718269348s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836936951s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718269348s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836936951s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718269348s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836936951s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718269348s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.132446 1 0.000017
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.133770 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.133812 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.133827 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867397308s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748802185s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867383003s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748802185s@ mbc={}] exit Reset 0.000032 1 0.000044
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867383003s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748802185s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867383003s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748802185s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867383003s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748802185s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867383003s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748802185s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867383003s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748802185s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.132196 1 0.000030
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.133306 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.133768 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.133781 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867756844s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.749267578s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867748260s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749267578s@ mbc={}] exit Reset 0.000018 1 0.000031
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867748260s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749267578s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867748260s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749267578s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867748260s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749267578s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867748260s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749267578s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867748260s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749267578s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.150859 7 0.000021
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.152855 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.152892 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.152905 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848821640s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730400085s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848813057s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730400085s@ mbc={}] exit Reset 0.000016 1 0.000030
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848813057s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730400085s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848813057s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730400085s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848813057s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730400085s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848813057s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730400085s@ mbc={}] exit Start 0.000011 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848813057s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730400085s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.163255 14 0.000037
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.168361 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.168423 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.168438 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.132769 1 0.000017
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.133900 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.133939 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.133955 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867094994s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748840332s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867082596s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748840332s@ mbc={}] exit Reset 0.000022 1 0.000039
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867082596s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748840332s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867082596s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748840332s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867082596s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748840332s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867082596s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748840332s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867082596s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748840332s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.163662 14 0.000049
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.168911 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.168944 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.168957 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836301804s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718193054s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.151237 7 0.000020
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.153162 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.153193 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.153205 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836292267s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718193054s@ mbc={}] exit Reset 0.000036 1 0.000043
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836292267s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718193054s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836292267s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718193054s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836292267s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718193054s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836292267s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718193054s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836292267s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718193054s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.132824 1 0.000033
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.134194 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.134250 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.134265 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867068291s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.749168396s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867057800s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749168396s@ mbc={}] exit Reset 0.000021 1 0.000041
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867057800s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749168396s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867057800s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749168396s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867057800s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749168396s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867057800s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749168396s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867057800s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749168396s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.164020 14 0.000252
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.169416 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.169450 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.169468 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835968971s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718185425s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835960388s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718185425s@ mbc={}] exit Reset 0.000019 1 0.000051
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835960388s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718185425s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835960388s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718185425s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835960388s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718185425s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835960388s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718185425s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835960388s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718185425s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.133012 1 0.000035
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.134360 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.134397 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.134413 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866882324s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.749176025s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866874695s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749176025s@ mbc={}] exit Reset 0.000018 1 0.000042
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866874695s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749176025s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866874695s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749176025s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866874695s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749176025s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866874695s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749176025s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866874695s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749176025s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.151621 7 0.000021
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.153481 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.153516 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.153528 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848036766s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730422974s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848023415s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730422974s@ mbc={}] exit Reset 0.000023 1 0.000042
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848023415s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730422974s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848023415s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730422974s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848023415s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730422974s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848023415s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730422974s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848023415s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730422974s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.164302 14 0.000038
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.169803 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.169843 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.169858 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835701942s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718177795s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835692406s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718177795s@ mbc={}] exit Reset 0.000018 1 0.000037
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835692406s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718177795s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835692406s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718177795s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835692406s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718177795s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835692406s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718177795s@ mbc={}] exit Start 0.000010 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835692406s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718177795s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.133252 1 0.000021
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.134472 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.134514 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.134622 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866633415s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.749198914s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866624832s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749198914s@ mbc={}] exit Reset 0.000023 1 0.000042
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866624832s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749198914s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866624832s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749198914s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866624832s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749198914s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866624832s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749198914s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866624832s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749198914s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.133396 1 0.000021
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.134608 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.134653 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.134666 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866514206s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.749183655s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866498947s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749183655s@ mbc={}] exit Reset 0.000023 1 0.000037
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866498947s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749183655s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866498947s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749183655s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866498947s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749183655s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866498947s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749183655s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866498947s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749183655s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.167440 14 0.000034
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.170235 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.170285 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.170301 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.832565308s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.715339661s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.832555771s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.715339661s@ mbc={}] exit Reset 0.000019 1 0.000038
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.832555771s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.715339661s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.832555771s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.715339661s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.832555771s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.715339661s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.832555771s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.715339661s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.832555771s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.715339661s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.151096 7 0.000027
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.153694 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.153733 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.153753 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848815918s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.731674194s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848808289s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.731674194s@ mbc={}] exit Reset 0.000040 1 0.000044
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848808289s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.731674194s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848808289s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.731674194s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848808289s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.731674194s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848808289s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.731674194s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848808289s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.731674194s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.133652 1 0.000026
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.134843 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.134878 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.134891 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866249084s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.749206543s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866222382s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749206543s@ mbc={}] exit Reset 0.000035 1 0.000049
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866222382s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749206543s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866222382s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749206543s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866222382s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749206543s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866222382s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749206543s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866222382s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749206543s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.164683 14 0.000036
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.169988 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.170044 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.170061 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835205078s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718284607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835196495s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] exit Reset 0.000018 1 0.000033
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835196495s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835196495s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835196495s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835196495s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835196495s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.132977 1 0.000023
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.134797 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.134853 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.134869 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866911888s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.750068665s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866901398s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.750068665s@ mbc={}] exit Reset 0.000020 1 0.000039
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866901398s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.750068665s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866901398s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.750068665s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866901398s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.750068665s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866901398s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.750068665s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866901398s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.750068665s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.152461 7 0.000021
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.154179 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.154213 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.154226 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.847294807s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730545044s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.847285271s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730545044s@ mbc={}] exit Reset 0.000018 1 0.000032
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.847285271s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730545044s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.847285271s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730545044s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.847285271s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730545044s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.847285271s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730545044s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.847285271s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730545044s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.168061 14 0.000073
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.170824 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.170888 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.834938049s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718284607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.834763527s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] exit Reset 0.001858 1 0.001876
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.834763527s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.834763527s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.834763527s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.834763527s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.834763527s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14(unlocked)] enter Initial
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000035 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000014
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000034 1 0.000021
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2(unlocked)] enter Initial
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000012
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000036 1 0.000018
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b(unlocked)] enter Initial
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000036 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000032 1 0.000047
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.846091270s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730407715s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.846067429s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730407715s@ mbc={}] exit Reset 0.001006 1 0.002389
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.846067429s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730407715s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.846067429s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730407715s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.846067429s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730407715s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.846067429s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730407715s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.846067429s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730407715s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008280 2 0.000042
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010610 2 0.000042
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.009459 2 0.000034
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.170912 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.830695152s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.715332031s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.830681801s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.715332031s@ mbc={}] exit Reset 0.000030 1 0.001343
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.830681801s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.715332031s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.830681801s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.715332031s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.830681801s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.715332031s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.830681801s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.715332031s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.830681801s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.715332031s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.833575249s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718284607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.833558083s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] exit Reset 0.003823 1 0.003836
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.833558083s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.833558083s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.833558083s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.833558083s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.833558083s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.136471 1 0.000029
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.138984 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.139048 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.139074 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.863280296s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748130798s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.863265038s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748130798s@ mbc={}] exit Reset 0.000029 1 0.000067
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.863265038s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748130798s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.863265038s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748130798s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.863265038s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748130798s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.863265038s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748130798s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.863265038s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748130798s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008039 2 0.000020
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007702 2 0.000029
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007314 2 0.000027
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007206 2 0.000019
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006710 2 0.000017
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006215 2 0.000017
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006138 2 0.000018
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.006461 2 0.000021
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 3375104 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 51 handle_osd_map epochs [51,52], i have 51, src has [1,52]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 51 handle_osd_map epochs [52,52], i have 52, src has [1,52]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 51 handle_osd_map epochs [52,52], i have 52, src has [1,52]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.904077 3 0.000028
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.904112 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000041 1 0.000063
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.904702 3 0.000039
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.904732 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.903099 2 0.000023
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.914142 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.904808 6 0.000421
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000043 1 0.000062
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000007 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000020 1 0.000043
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000017 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.905128 6 0.000052
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.902542 2 0.000032
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.910668 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.903479 2 0.000025
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.911842 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.905503 6 0.000063
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.905647 3 0.000019
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.905669 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.903736 3 0.000023
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.903753 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000049 1 0.000071
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000029 1 0.000038
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000027 1 0.000039
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000038 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000148 1 0.000159
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.906525 6 0.000056
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000025 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.906732 3 0.000025
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.906755 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000933 1 0.000942
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000025 1 0.000048
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000014 1 0.000024
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.906344 6 0.000051
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.902964 2 0.000042
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.910755 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.902840 2 0.000024
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.910215 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.902852 2 0.000020
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.910109 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.907968 6 0.000028
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.908185 3 0.000021
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.908199 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.907855 3 0.000028
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000023 1 0.000030
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.907969 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000013 1 0.000025
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 43'551 unknown NOTIFY mbc={}] exit Started/Stray 0.907554 3 0.000019
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 43'551 unknown NOTIFY mbc={}] exit Started 0.907573 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 43'551 unknown NOTIFY mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped mbc={}] exit Reset 0.000030 1 0.000043
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.898652 2 0.000037
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.904929 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.898603 2 0.000021
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.904797 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000016 1 0.000026
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.908710 6 0.000025
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000152 1 0.000251
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.908953 6 0.000488
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000018 1 0.000027
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.909133 3 0.000019
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.909149 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000023 1 0.000034
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000034 1 0.000046
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.909328 6 0.000056
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000020 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.909457 3 0.000021
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.909479 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.909598 6 0.000066
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000028 1 0.000044
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.909695 3 0.000019
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.909710 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000015 1 0.000023
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000038 1 0.000046
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000030 1 0.000023
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.903486 2 0.000023
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.910264 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.910482 3 0.000023
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.910502 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000024 1 0.000037
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000018 1 0.000020
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000959 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000022 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.911869 3 0.000021
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.911883 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.911348 6 0.000057
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.911269 3 0.000025
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.911287 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000019 1 0.000025
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.904869 2 0.000025
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.914551 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000011 1 0.000022
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000030 1 0.000042
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 lc 43'17 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000016 1 0.000027
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.913458 3 0.000305
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.913478 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.899315 2 0.000036
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.905835 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000023 1 0.000031
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 lc 43'57 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.914317 3 0.000023
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000012 1 0.000022
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.914333 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000033 1 0.000042
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000013 1 0.000023
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000019 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.001015 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.003233 2 0.000259
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.909417 7 0.000051
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.907950 7 0.000063
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.908644 7 0.000052
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.909642 7 0.000050
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.906263 7 0.000027
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.909766 7 0.000148
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.908770 7 0.000024
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.916859 7 0.000234
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.909022 7 0.000117
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.907858 7 0.000026
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.907518 7 0.000026
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.913146 7 0.000041
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.912466 7 0.000032
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.911735 7 0.000037
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.003062 2 0.000017
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.906137 7 0.000026
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.910350 7 0.000023
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.910892 7 0.000042
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002835 2 0.000017
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002702 2 0.000014
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.003374 2 0.000692
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.912413 7 0.000029
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002173 2 0.000016
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.910230 7 0.000036
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.910789 7 0.000025
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000358 1 0.000013
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000431 1 0.000013
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000434 1 0.000009
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.906781 7 0.000027
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.914099 7 0.000049
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.909001 7 0.000032
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.912748 7 0.000025
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000495 1 0.000037
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000671 1 0.000007
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000726 1 0.000008
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000992 1 0.000007
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 lc 43'17 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 lc 43'57 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001250 1 0.000013
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004414 4 0.000078
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004713 4 0.000488
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004394 4 0.000034
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003871 4 0.000036
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003878 4 0.000030
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003836 4 0.000027
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003671 4 0.000029
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003652 4 0.000024
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003220 4 0.000034
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 lc 43'17 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.003020 4 0.000040
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 lc 43'17 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 lc 43'57 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.002971 4 0.000036
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 lc 43'57 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001463 1 0.000007
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000109 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.000041 1 0.000014
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 lc 43'17 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000078 1 0.000026
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 lc 43'17 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 lc 43'17 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 lc 43'17 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.913643 7 0.000063
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.912474 7 0.000050
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.912819 7 0.000056
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.913602 7 0.000060
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.915229 7 0.000055
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.917095 7 0.000074
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.917488 7 0.000381
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.911429 7 0.000050
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.909765 7 0.000094
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.004166 1 0.000037
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.007426 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.912297 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.011331 1 0.000015
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.014412 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.919951 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.018737 1 0.000015
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.021594 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.928140 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.026096 1 0.000021
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.028820 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.935183 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.033449 1 0.000014
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.036847 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.941994 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.040787 1 0.000015
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.042982 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.952598 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.048131 1 0.000008
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.048510 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.957946 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.055443 1 0.000019
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.055899 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.963868 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.062814 1 0.000011
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.063266 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.972925 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.069987 1 0.000016
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.070682 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.976961 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.077464 1 0.000172
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.078141 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.986803 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.084656 1 0.000111
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.085492 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.995275 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1d( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.091900 1 0.000034
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1d( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.092921 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1d( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.001707 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1f( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.099009 1 0.000031
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1f( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.100279 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1f( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.009324 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.116569 2 0.000027
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 lc 43'57 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.116718 2 0.000031
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 lc 43'57 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 lc 43'57 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 lc 43'57 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 pct=0'0 crt=43'2 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.158600 3 0.000023
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 pct=0'0 crt=43'2 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.158619 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 pct=0'0 crt=43'2 lcod 0'0 active mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 pct=0'0 crt=43'2 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.228964 3 0.000014
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.228978 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.129001 1 0.000046
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247234 1 0.000007
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247326 1 0.000007
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247375 1 0.000007
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247438 1 0.000007
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247427 1 0.000120
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247445 1 0.000009
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247474 1 0.000013
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247412 1 0.000127
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247453 1 0.000252
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247452 1 0.000009
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247483 1 0.000011
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247376 1 0.000011
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247290 1 0.000041
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247302 1 0.000009
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247322 1 0.000009
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.246440 2 0.000010
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.246432 1 0.000025
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.246456 1 0.000014
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.246491 1 0.000013
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.246509 1 0.000011
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.246547 1 0.000029
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.246550 1 0.000008
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.246578 1 0.000018
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.246597 1 0.000014
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.246720 1 0.000068
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 pct=0'0 crt=43'2 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.091966 1 0.000038
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 pct=0'0 crt=43'2 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.021246 1 0.000030
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.18( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.066959 1 0.000060
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.18( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.314247 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.18( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.221782 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.316796 3 0.000017
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.316833 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000039 1 0.000069
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.074203 1 0.000029
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.321560 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.234724 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.f( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.081422 1 0.000036
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.f( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.328822 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.f( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.241303 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.e( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.088773 1 0.000021
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.e( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.336231 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.e( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.247982 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.17( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.096159 1 0.000017
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.17( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.343606 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.17( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.249869 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.6( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.103497 1 0.000015
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.6( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.350959 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.6( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.261325 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.9( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.110996 1 0.000017
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.9( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.358495 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.9( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.269404 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.c( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.118273 1 0.000014
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.c( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.365703 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.c( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.278249 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.14( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.125661 1 0.000021
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.14( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.373381 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.14( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.290258 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.14( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.133009 1 0.000015
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.14( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.380479 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.14( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.290726 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.4( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.140341 1 0.000019
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.4( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.387850 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.4( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.298655 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1a( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.147781 1 0.000015
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1a( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.395185 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1a( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.301985 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.10( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.155100 1 0.000014
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.10( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.402433 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.10( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.316582 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.10( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.162511 1 0.000018
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.10( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.409837 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.10( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.318856 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.e( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.169867 1 0.000014
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.e( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.417210 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.e( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.329976 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.177203 1 0.000022
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.425186 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.333061 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.184338 1 0.000035
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.430802 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.344482 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 52 heartbeat osd_stat(store_statfs(0x4fcee6000/0x0/0x4ffc00000, data 0xb1113/0x144000, compress 0x0/0x0/0x0, omap 0x70a5, meta 0x2bc8f5b), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.191667 1 0.000028
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.438148 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.350663 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.199048 1 0.000018
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.445565 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.358403 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.206393 1 0.000018
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.452925 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.370057 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.213775 1 0.000015
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.460345 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.373978 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 2138112 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.221199 1 0.000015
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.467773 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.379218 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.228579 1 0.000016
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.475176 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.392691 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.235935 1 0.000010
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.482552 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.392335 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.243271 1 0.000025
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.490018 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.405266 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.576270 3 0.000020
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.576286 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000030 1 0.000034
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 pct=0'0 crt=43'2 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.332083 2 0.000059
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 pct=0'0 crt=43'2 lcod 0'0 active mbc={}] exit Started/ToDelete 0.424068 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 pct=0'0 crt=43'2 lcod 0'0 active mbc={}] exit Started 1.492041 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.339557 2 0.000041
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ToDelete 0.360821 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started 1.501171 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.f( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.288211 2 0.000090
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.f( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ToDelete 0.288284 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.f( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started 1.514102 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.6( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.103152 2 0.000060
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.6( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ToDelete 0.103209 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.6( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started 1.587491 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.677610 3 0.002062
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.679698 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000059 1 0.000071
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.b( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.014242 2 0.000062
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.b( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ToDelete 0.014325 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.b( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started 1.602757 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 52 handle_osd_map epochs [53,53], i have 52, src has [1,53]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993105 4 0.000040
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.993170 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993316 4 0.000031
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.993367 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993474 4 0.000032
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.993525 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993496 4 0.000034
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.993548 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993772 4 0.000038
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.993830 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992915 4 0.001060
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.994022 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994254 4 0.000047
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.994337 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994703 4 0.000031
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.994753 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994514 4 0.000095
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.994630 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994776 4 0.000045
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.994838 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995206 4 0.000038
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.995259 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 43'551 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994676 4 0.000778
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.995600 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995542 4 0.000068
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.995637 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996133 4 0.000044
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.996202 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994665 4 0.000985
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.996590 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994995 4 0.000918
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.995060 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 53 handle_osd_map epochs [53,53], i have 53, src has [1,53]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.333954 5 0.000162
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.333077 5 0.000082
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000062 1 0.000030
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Dec 13 02:36:29 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/203291855' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.334830 5 0.000263
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000999 1 0.000018
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.333883 5 0.000208
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/Activating 0.334871 5 0.000207
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/Activating 0.335069 5 0.000112
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/Activating 0.334962 5 0.000184
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.334687 5 0.000115
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.334419 5 0.000066
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.334468 5 0.000082
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.334955 5 0.000064
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.334233 5 0.000074
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.335468 5 0.000321
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.334609 5 0.000085
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.334668 5 0.000108
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.335415 5 0.000155
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.042456 2 0.000016
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.043605 1 0.000085
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000287 1 0.000015
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.052536 2 0.000145
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.095593 1 0.000017
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000444 1 0.000138
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.031187 2 0.000075
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.127162 1 0.000011
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000312 1 0.000050
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 2015232 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.038373 2 0.000060
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.165909 1 0.000021
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000330 1 0.000044
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.059552 2 0.000031
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.225865 1 0.000015
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000296 1 0.000058
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.059557 2 0.000084
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.285812 1 0.000019
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000356 1 0.000033
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.345758 1 0.000243
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.059634 2 0.000056
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000319 1 0.000052
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.066567 2 0.000089
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.412841 1 0.000023
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000332 1 0.000047
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.038276 2 0.000063
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.451528 1 0.000014
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000419 1 0.000067
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.490319 1 0.000009
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.038409 2 0.000066
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000383 1 0.000083
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.031232 2 0.000055
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.522018 1 0.000012
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000346 1 0.000046
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.024296 2 0.000041
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.546726 1 0.000008
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000395 1 0.000034
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.578437 1 0.000013
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.031328 2 0.000033
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000331 1 0.000030
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.066634 2 0.000029
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 53'553 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.645448 1 0.000013
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 53'553 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 53'553 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000533 1 0.000033
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 53'553 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 53 handle_osd_map epochs [53,54], i have 53, src has [1,54]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.086425781s of 10.170964241s, submitted: 539
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.092938 1 0.000291
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.007001 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.000195 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.000285 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.576459 1 0.000069
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327981949s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.118011475s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327939034s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118011475s@ mbc={}] exit Reset 0.000109 1 0.000321
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327939034s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118011475s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327939034s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118011475s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327939034s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118011475s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327939034s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118011475s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327939034s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118011475s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.386212 1 0.000139
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.007216 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.000775 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.000790 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327603340s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117866516s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327571869s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117866516s@ mbc={}] exit Reset 0.000047 1 0.000064
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327571869s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117866516s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327571869s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117866516s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327571869s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117866516s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327571869s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117866516s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327571869s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117866516s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.326330 1 0.000223
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.007270 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.001109 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.001121 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327441216s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117881775s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327405930s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117881775s@ mbc={}] exit Reset 0.000048 1 0.000062
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327405930s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117881775s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327405930s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117881775s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327405930s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117881775s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327405930s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117881775s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327405930s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117881775s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.544511 1 0.000173
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.007733 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.001137 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.007693 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.001278 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.001171 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326743126s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117576599s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326668739s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117576599s@ mbc={}] exit Reset 0.000167 1 0.001053
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326668739s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117576599s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326668739s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117576599s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326668739s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117576599s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326668739s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117576599s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326668739s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117576599s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.150692 1 0.000067
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.007696 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.002042 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.002058 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326932907s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117958069s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326913834s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117958069s@ mbc={}] exit Reset 0.000032 1 0.000048
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326913834s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117958069s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326913834s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117958069s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326913834s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117958069s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326913834s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117958069s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326913834s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117958069s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.260221 1 0.000133
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.007807 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.002568 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.002581 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326637268s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117897034s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326610565s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117897034s@ mbc={}] exit Reset 0.000038 1 0.000053
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326610565s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117897034s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326610565s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117897034s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326610565s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117897034s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326610565s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117897034s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326610565s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117897034s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.001310 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323873520s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.116912842s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.446802 1 0.000076
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.009899 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.003935 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.003952 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324602127s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117736816s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324528694s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117736816s@ mbc={}] exit Reset 0.000089 1 0.002402
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324528694s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117736816s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324528694s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117736816s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324528694s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117736816s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324528694s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117736816s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324528694s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117736816s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.029552 1 0.000183
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.009680 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.004948 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.004961 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324680328s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.118041992s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324645996s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118041992s@ mbc={}] exit Reset 0.000046 1 0.000061
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324645996s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118041992s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324645996s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118041992s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324645996s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118041992s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324645996s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118041992s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324645996s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118041992s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 54 handle_osd_map epochs [54,54], i have 54, src has [1,54]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.506827 1 0.000100
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323390007s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.116912842s@ mbc={}] exit Reset 0.000529 1 0.003572
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.009348 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.004455 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.004478 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323390007s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.116912842s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323390007s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.116912842s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323390007s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.116912842s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 54 handle_osd_map epochs [54,54], i have 54, src has [1,54]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324029922s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117660522s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323875427s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117660522s@ mbc={}] exit Reset 0.000189 1 0.002904
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323390007s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.116912842s@ mbc={}] exit Start 0.000080 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323390007s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.116912842s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323875427s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117660522s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323875427s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117660522s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323875427s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117660522s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323875427s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117660522s@ mbc={}] exit Start 0.000093 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323875427s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117660522s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.185243 1 0.000223
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.010369 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.005993 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.006013 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323761940s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117958069s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.224417 1 0.000101
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.010423 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.006073 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.006091 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323680878s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117927551s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323638916s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117927551s@ mbc={}] exit Reset 0.000066 1 0.000089
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323638916s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117927551s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323638916s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117927551s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323638916s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117927551s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323638916s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117927551s@ mbc={}] exit Start 0.000009 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323638916s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117927551s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323557854s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117958069s@ mbc={}] exit Reset 0.000239 1 0.000438
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323557854s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117958069s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323557854s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117958069s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323557854s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117958069s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323557854s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117958069s@ mbc={}] exit Start 0.000009 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323557854s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117958069s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.129458 1 0.000094
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.010479 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.633818 1 0.000068
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.010643 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.007099 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.006878 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.007160 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.006932 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.322302818s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.116996765s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323290825s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.118003845s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323190689s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118003845s@ mbc={}] exit Reset 0.000184 1 0.000323
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.322153091s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.116996765s@ mbc={}] exit Reset 0.000176 1 0.000443
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.322153091s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.116996765s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323190689s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118003845s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.322153091s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.116996765s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323190689s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118003845s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323190689s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118003845s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.322153091s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.116996765s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323190689s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118003845s@ mbc={}] exit Start 0.000385 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.322153091s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.116996765s@ mbc={}] exit Start 0.000413 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.322153091s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.116996765s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323190689s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118003845s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 53'553 mlcod 53'553 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.074670 5 0.000036
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 53'553 mlcod 53'553 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.720099 4 0.000148
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000473 1 0.000039
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.052330 2 0.000074
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 2187264 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 54 handle_osd_map epochs [55,55], i have 54, src has [1,55]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.905892 1 0.000064
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.014385 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.009026 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.009050 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.320955276s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 active pruub 119.118865967s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.320901871s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118865967s@ mbc={}] exit Reset 0.000080 1 0.000131
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.320901871s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118865967s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.320901871s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118865967s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.320901871s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118865967s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.320901871s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118865967s@ mbc={}] exit Start 0.000023 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.320901871s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118865967s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 53'553 mlcod 53'553 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.959013 1 0.000077
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 53'553 mlcod 53'553 active+remapped mbc={255={}}] exit Started/Primary/Active 2.014522 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 53'553 mlcod 53'553 active+remapped mbc={255={}}] exit Started/Primary 3.009519 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 53'553 mlcod 53'553 active+remapped mbc={255={}}] exit Started 3.009552 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 53'553 mlcod 53'553 active+remapped mbc={255={}}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.319732666s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=48'552 lcod 53'553 active pruub 119.118080139s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.319633484s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=48'552 lcod 53'553 unknown NOTIFY pruub 119.118080139s@ mbc={}] exit Reset 0.000132 1 0.000380
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.319633484s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=48'552 lcod 53'553 unknown NOTIFY pruub 119.118080139s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.319633484s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=48'552 lcod 53'553 unknown NOTIFY pruub 119.118080139s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.319633484s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=48'552 lcod 53'553 unknown NOTIFY pruub 119.118080139s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.319633484s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=48'552 lcod 53'553 unknown NOTIFY pruub 119.118080139s@ mbc={}] exit Start 0.000106 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.319633484s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=48'552 lcod 53'553 unknown NOTIFY pruub 119.118080139s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 55 handle_osd_map epochs [55,55], i have 55, src has [1,55]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009017 7 0.000038
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.012167 7 0.000034
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.011180 7 0.000032
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.008006 7 0.000052
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.012012 7 0.000443
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.011592 7 0.000055
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009299 7 0.000038
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000091 1 0.000050
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.008221 7 0.000044
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.007177 7 0.000651
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000132 1 0.000025
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000460 1 0.000015
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.007708 7 0.000539
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009316 7 0.000327
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009085 7 0.000333
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.012104 7 0.000033
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013130 7 0.000052
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000775 1 0.000077
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000834 1 0.000012
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000851 1 0.000020
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000945 1 0.000033
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000920 1 0.000096
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000645 1 0.000405
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000522 1 0.000047
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000556 1 0.000021
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000547 1 0.000048
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000580 1 0.000048
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000590 1 0.000048
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.19( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.097529 2 0.000195
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.19( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.097685 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.19( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.106756 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.3( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.223121 2 0.000339
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.3( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.223524 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.3( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.235716 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.289672 2 0.000170
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.290171 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.301370 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1f( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.326204 2 0.000088
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1f( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.327016 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1f( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.335110 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.d( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.385467 2 0.000116
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.d( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.386332 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.d( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.398361 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.17( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.414992 2 0.000177
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.17( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.415902 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.17( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.427522 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 2998272 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765829 data_alloc: 218103808 data_used: 6167
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.f( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.474172 2 0.000090
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.f( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.475152 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.f( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.484491 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1b( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.496276 2 0.000064
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1b( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.497223 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1b( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.504954 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1d( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.533366 2 0.000101
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1d( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.534118 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1d( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.542659 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.13( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.577932 2 0.000183
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.13( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.578522 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.13( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.586734 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.11( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.629580 2 0.000087
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.11( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.630166 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.11( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.639706 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.9( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.666566 2 0.000080
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.9( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.667159 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.9( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.676424 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.b( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.696100 2 0.000086
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.b( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.696724 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.b( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.708876 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.15( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.725663 2 0.000096
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.15( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.726304 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.15( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.739467 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 55 heartbeat osd_stat(store_statfs(0x4fcefc000/0x0/0x4ffc00000, data 0xb6bc9/0x130000, compress 0x0/0x0/0x0, omap 0x7ae9, meta 0x2bc8517), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 55 handle_osd_map epochs [56,56], i have 55, src has [1,56]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 pi=[47,55)/1 crt=48'552 lcod 53'553 unknown NOTIFY mbc={}] exit Started/Stray 1.455963 6 0.000266
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 pi=[47,55)/1 crt=48'552 lcod 53'553 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 pi=[47,55)/1 crt=48'552 lcod 53'553 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.456680 6 0.000102
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 pi=[47,55)/1 crt=48'552 lcod 53'553 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000690 2 0.000082
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 pi=[47,55)/1 crt=48'552 lcod 53'553 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000698 2 0.000021
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 3096576 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.5( v 53'554 (0'0,53'554] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 DELETING pi=[47,55)/1 crt=53'554 lcod 53'553 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.075239 2 0.000242
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.5( v 53'554 (0'0,53'554] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 pi=[47,55)/1 crt=53'554 lcod 53'553 unknown NOTIFY mbc={}] exit Started/ToDelete 0.076004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.5( v 53'554 (0'0,53'554] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 pi=[47,55)/1 crt=53'554 lcod 53'553 unknown NOTIFY mbc={}] exit Started 1.532140 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.7( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 DELETING pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.127108 2 0.000150
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.7( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.127849 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.7( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.584576 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 3096576 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 3096576 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 3096576 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 56 handle_osd_map epochs [57,57], i have 56, src has [1,57]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 3096576 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 690212 data_alloc: 218103808 data_used: 5223
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 57 heartbeat osd_stat(store_statfs(0x4fcef9000/0x0/0x4ffc00000, data 0xb9e04/0x12f000, compress 0x0/0x0/0x0, omap 0x7fe9, meta 0x2bc8017), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 57 handle_osd_map epochs [58,58], i have 57, src has [1,58]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 3072000 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 3072000 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 3063808 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 58 handle_osd_map epochs [59,59], i have 58, src has [1,59]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 3055616 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 59 handle_osd_map epochs [59,60], i have 59, src has [1,60]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.015730858s of 11.047432899s, submitted: 87
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 3006464 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 698528 data_alloc: 218103808 data_used: 5223
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 3006464 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 60 handle_osd_map epochs [60,61], i have 60, src has [1,61]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 23.112061 39 0.000079
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 23.114811 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 23.114960 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 23.115003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887648582s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 active pruub 125.730354309s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 23.112179 39 0.000064
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 23.114523 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 23.114571 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 23.114597 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887540817s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730354309s@ mbc={}] exit Reset 0.000136 1 0.000405
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887540817s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730354309s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887540817s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730354309s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887540817s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730354309s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887540817s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730354309s@ mbc={}] exit Start 0.000010 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887540817s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730354309s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887639046s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 active pruub 125.730476379s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887612343s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730476379s@ mbc={}] exit Reset 0.000061 1 0.000111
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887612343s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730476379s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887612343s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730476379s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887612343s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730476379s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887612343s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730476379s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887612343s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730476379s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 23.112549 39 0.000061
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 23.114748 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 23.114778 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 23.114792 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887332916s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 active pruub 125.730545044s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887315750s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730545044s@ mbc={}] exit Reset 0.000030 1 0.000058
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887315750s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730545044s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887315750s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730545044s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887315750s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730545044s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887315750s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730545044s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887315750s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730545044s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 23.112727 39 0.000065
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 23.114741 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 23.114773 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 23.114787 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887020111s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 active pruub 125.730529785s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887008667s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730529785s@ mbc={}] exit Reset 0.000026 1 0.000089
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887008667s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730529785s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887008667s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730529785s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887008667s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730529785s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887008667s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730529785s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887008667s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730529785s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 61 handle_osd_map epochs [60,61], i have 61, src has [1,61]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 61 heartbeat osd_stat(store_statfs(0x4fcef0000/0x0/0x4ffc00000, data 0xbf0d8/0x138000, compress 0x0/0x0/0x0, omap 0x87af, meta 0x2bc7851), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 61 handle_osd_map epochs [62,62], i have 61, src has [1,62]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.208160 3 0.000022
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.208811 3 0.000041
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.208847 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.208212 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.208544 3 0.000023
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.208561 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.208994 3 0.000038
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.209016 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000028 1 0.000036
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000025 1 0.000032
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000020 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000156 1 0.000170
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000028 1 0.000048
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000021 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000393 1 0.000463
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000039 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000580 1 0.000606
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000030 1 0.000132
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000027 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000058 1 0.000132
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000019 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 2990080 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.d scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.d scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 62 handle_osd_map epochs [62,63], i have 62, src has [1,63]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999981 4 0.000046
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 63 handle_osd_map epochs [63,63], i have 63, src has [1,63]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000361 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999660 4 0.000208
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.999943 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000385 4 0.000057
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000693 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000409 4 0.000176
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000797 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.001325 5 0.000893
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000059 1 0.000062
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000688 1 0.000035
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.001900 5 0.000984
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/Activating 0.002809 5 0.000552
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.002092 5 0.000645
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.028370 2 0.000014
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.028223 1 0.000043
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000333 1 0.000124
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.045309 2 0.000029
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.073549 1 0.000093
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000330 1 0.000036
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.059595 2 0.000054
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.133536 1 0.000064
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000376 1 0.000037
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.045358 2 0.000043
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 2981888 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.973185 1 0.000204
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.004047 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.004475 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.004505 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997287750s) [2] async=[2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 active pruub 134.053726196s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.867717 1 0.000095
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.004211 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.004177 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.004257 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997888565s) [2] async=[2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 active pruub 134.054489136s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997838974s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.054489136s@ mbc={}] exit Reset 0.000069 1 0.000091
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997838974s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.054489136s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997838974s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.054489136s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997838974s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.054489136s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997838974s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.054489136s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.927854 1 0.000075
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.003961 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.004736 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.004752 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997774124s) [2] async=[2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 active pruub 134.054504395s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997743607s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.054504395s@ mbc={}] exit Reset 0.000046 1 0.000065
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997743607s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.054504395s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997743607s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.054504395s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997743607s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.054504395s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997743607s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.054504395s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997743607s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.054504395s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.822257 1 0.000053
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.003775 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.004670 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.004741 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.998094559s) [2] async=[2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 active pruub 134.055038452s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.998066902s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.055038452s@ mbc={}] exit Reset 0.000040 1 0.000055
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.998066902s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.055038452s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.998066902s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.055038452s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.998066902s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.055038452s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.998066902s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.055038452s@ mbc={}] exit Start 0.000024 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.998066902s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.055038452s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997838974s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.054489136s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.994911194s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.053726196s@ mbc={}] exit Reset 0.002416 1 0.002497
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.994911194s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.053726196s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.994911194s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.053726196s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.994911194s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.053726196s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.994911194s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.053726196s@ mbc={}] exit Start 0.000090 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.994911194s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.053726196s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 64 handle_osd_map epochs [64,64], i have 64, src has [1,64]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 2973696 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fcee4000/0x0/0x4ffc00000, data 0xc5e36/0x144000, compress 0x0/0x0/0x0, omap 0x91e3, meta 0x2bc6e1d), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 2891776 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 714440 data_alloc: 218103808 data_used: 5223
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.594760 6 0.000065
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.595186 6 0.001628
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.592906 6 0.000203
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.595168 6 0.000042
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000632 2 0.000047
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000582 2 0.000015
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000610 2 0.000021
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000577 2 0.000017
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] lb MIN local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 DELETING pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.079113 2 0.000099
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] lb MIN local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.079803 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] lb MIN local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.674628 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] lb MIN local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 DELETING pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.138243 2 0.000087
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] lb MIN local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.138859 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] lb MIN local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.734069 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] lb MIN local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 DELETING pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.167940 2 0.000065
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] lb MIN local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.168581 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] lb MIN local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.761626 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] lb MIN local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 DELETING pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.212152 2 0.000081
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] lb MIN local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.212778 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] lb MIN local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.807987 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 3112960 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 28.225575 55 0.000080
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 28.227833 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 28.227867 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 28.227885 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.774515152s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 active pruub 133.730743408s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.774483681s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 133.730743408s@ mbc={}] exit Reset 0.000057 1 0.000092
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.774483681s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 133.730743408s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.774483681s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 133.730743408s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.774483681s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 133.730743408s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.774483681s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 133.730743408s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.774483681s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 133.730743408s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 28.225661 55 0.000084
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 28.227941 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 28.227976 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 28.227993 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.774012566s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 active pruub 133.730667114s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.773996353s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 133.730667114s@ mbc={}] exit Reset 0.000032 1 0.000317
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.773996353s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 133.730667114s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.773996353s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 133.730667114s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.773996353s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 133.730667114s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.773996353s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 133.730667114s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.773996353s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 133.730667114s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 66 handle_osd_map epochs [66,67], i have 66, src has [1,67]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.093958 3 0.000352
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.094289 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000048 1 0.000068
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.094026 3 0.000023
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.094050 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000031 1 0.000046
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000716 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004060 2 0.000029
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000035 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 67 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003766 2 0.000749
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000019 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 3088384 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 67 handle_osd_map epochs [67,68], i have 67, src has [1,68]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 68 handle_osd_map epochs [67,68], i have 68, src has [1,68]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997613 3 0.000057
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.001436 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998112 3 0.000127
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.002294 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 68 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.003261 5 0.000149
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.003521 5 0.000158
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000063 1 0.000026
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000370 1 0.000015
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.042958 1 0.000064
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.042547 2 0.000033
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000408 1 0.000039
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.052408 2 0.000038
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 3047424 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fceda000/0x0/0x4ffc00000, data 0xcc904/0x14a000, compress 0x0/0x0/0x0, omap 0x9bf8, meta 0x2bc6408), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.922536 1 0.000070
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.022017 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.023478 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.024227 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.975432 1 0.000138
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.021797 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.024102 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.024126 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.981441498s) [2] async=[2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 active pruub 139.056365967s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.981397629s) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 139.056365967s@ mbc={}] exit Reset 0.000105 1 0.000178
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.981397629s) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 139.056365967s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.981397629s) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 139.056365967s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.981397629s) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 139.056365967s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.981397629s) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 139.056365967s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.981397629s) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 139.056365967s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.981289864s) [2] async=[2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 active pruub 139.056335449s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.980957031s) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 139.056335449s@ mbc={}] exit Reset 0.000494 1 0.000527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.980957031s) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 139.056335449s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.980957031s) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 139.056335449s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.980957031s) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 139.056335449s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.980957031s) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 139.056335449s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.980957031s) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 139.056335449s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 69 handle_osd_map epochs [69,69], i have 69, src has [1,69]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 3006464 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 3006464 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 688308 data_alloc: 218103808 data_used: 4393
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.804404259s of 10.846774101s, submitted: 74
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.608874 6 0.000075
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.609287 6 0.000059
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000423 1 0.000062
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000945 2 0.000079
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] lb MIN local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 DELETING pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.060206 3 0.000124
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] lb MIN local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.060681 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] lb MIN local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.670036 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] lb MIN local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 DELETING pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.104130 2 0.000093
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] lb MIN local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.105121 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] lb MIN local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.714042 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 70 heartbeat osd_stat(store_statfs(0x4fcedd000/0x0/0x4ffc00000, data 0xce37d/0x14d000, compress 0x0/0x0/0x0, omap 0x9e91, meta 0x2bc616f), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 3112960 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 3112960 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 70 heartbeat osd_stat(store_statfs(0x4fcedc000/0x0/0x4ffc00000, data 0xcfbbc/0x14c000, compress 0x0/0x0/0x0, omap 0xa12c, meta 0x2bc5ed4), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 3104768 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 3104768 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 3096576 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 668873 data_alloc: 218103808 data_used: 3865
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 70 handle_osd_map epochs [71,72], i have 70, src has [1,72]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 3072000 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 3063808 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 72 heartbeat osd_stat(store_statfs(0x4fced8000/0x0/0x4ffc00000, data 0xd32f4/0x152000, compress 0x0/0x0/0x0, omap 0xa387, meta 0x2bc5c79), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 39.500612 75 0.000351
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 39.503075 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 39.503110 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 39.503131 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499458313s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 active pruub 141.730636597s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499427795s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 141.730636597s@ mbc={}] exit Reset 0.000066 1 0.000094
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499427795s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 141.730636597s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499427795s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 141.730636597s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499427795s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 141.730636597s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499427795s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 141.730636597s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499427795s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 141.730636597s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 39.500667 75 0.000135
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 39.502576 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 39.502609 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 39.502625 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499320030s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 active pruub 141.730758667s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499304771s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 141.730758667s@ mbc={}] exit Reset 0.000031 1 0.000057
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499304771s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 141.730758667s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499304771s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 141.730758667s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499304771s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 141.730758667s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499304771s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 141.730758667s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499304771s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 141.730758667s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 3031040 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.888488 3 0.000035
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.888595 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000194 1 0.000297
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000046 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000036 1 0.000143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.889228 3 0.000025
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.889254 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000051 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000035 1 0.000054
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000016 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000061 1 0.000123
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000015 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 3014656 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 74 handle_osd_map epochs [74,75], i have 74, src has [1,75]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 75 handle_osd_map epochs [74,75], i have 75, src has [1,75]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002827 4 0.000053
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.002952 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003599 4 0.000149
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003794 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.035962 5 0.000480
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000043 1 0.000047
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.035983 5 0.000184
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000797 1 0.000010
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035528 2 0.000061
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.036073 1 0.000034
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000304 1 0.000051
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.066501 2 0.000051
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 2990080 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 686792 data_alloc: 218103808 data_used: 3865
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 75 heartbeat osd_stat(store_statfs(0x4fced0000/0x0/0x4ffc00000, data 0xd6943/0x158000, compress 0x0/0x0/0x0, omap 0xa8c5, meta 0x2bc573b), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.037201881s of 10.051774025s, submitted: 64
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.931866 1 0.000154
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.004682 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.007647 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.007724 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031506538s) [2] async=[2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 active pruub 151.159759521s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031457901s) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 151.159759521s@ mbc={}] exit Reset 0.000074 1 0.000107
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031457901s) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 151.159759521s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031457901s) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 151.159759521s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031457901s) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 151.159759521s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031457901s) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 151.159759521s@ mbc={}] exit Start 0.000008 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031457901s) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 151.159759521s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.865321 1 0.000074
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.004362 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.008178 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.008258 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031195641s) [2] async=[2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 active pruub 151.159759521s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031086922s) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 151.159759521s@ mbc={}] exit Reset 0.000126 1 0.000150
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031086922s) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 151.159759521s@ mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031086922s) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 151.159759521s@ mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031086922s) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 151.159759521s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031086922s) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 151.159759521s@ mbc={}] exit Start 0.000008 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031086922s) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 151.159759521s@ mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 76 handle_osd_map epochs [76,76], i have 76, src has [1,76]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 2949120 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 76 handle_osd_map epochs [76,77], i have 76, src has [1,77]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.007325 7 0.000059
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.007839 7 0.000049
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000059 1 0.000120
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000050 1 0.000032
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] lb MIN local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 DELETING pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.075016 2 0.000170
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] lb MIN local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.075174 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] lb MIN local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.082566 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] lb MIN local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 DELETING pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.111936 2 0.000093
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] lb MIN local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.112026 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] lb MIN local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.119901 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 2924544 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 2924544 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 77 heartbeat osd_stat(store_statfs(0x4fceca000/0x0/0x4ffc00000, data 0xdb83f/0x15e000, compress 0x0/0x0/0x0, omap 0xb06a, meta 0x2bc4f96), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 2924544 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 77 heartbeat osd_stat(store_statfs(0x4fceca000/0x0/0x4ffc00000, data 0xdb83f/0x15e000, compress 0x0/0x0/0x0, omap 0xb06a, meta 0x2bc4f96), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 2908160 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 679412 data_alloc: 218103808 data_used: 4032
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 2899968 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 2891776 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 2891776 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 78 handle_osd_map epochs [80,80], i have 78, src has [1,80]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 78 handle_osd_map epochs [79,80], i have 78, src has [1,80]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 2883584 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77856768 unmapped: 2875392 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 690541 data_alloc: 218103808 data_used: 4032
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 80 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xe0b13/0x167000, compress 0x0/0x0/0x0, omap 0xb571, meta 0x2bc4a8f), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.277196884s of 10.296418190s, submitted: 33
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77856768 unmapped: 2875392 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 2867200 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 2859008 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 82 heartbeat osd_stat(store_statfs(0x4fcebb000/0x0/0x4ffc00000, data 0xe424b/0x16d000, compress 0x0/0x0/0x0, omap 0xba7c, meta 0x2bc4584), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 2859008 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77881344 unmapped: 2850816 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 707402 data_alloc: 218103808 data_used: 4617
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 83 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xe5de7/0x170000, compress 0x0/0x0/0x0, omap 0xbd29, meta 0x2bc42d7), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 83 handle_osd_map epochs [84,84], i have 84, src has [1,84]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77881344 unmapped: 2850816 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77881344 unmapped: 2850816 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.d scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.d scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 84 handle_osd_map epochs [85,87], i have 84, src has [1,87]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15(unlocked)] enter Initial
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=0 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000084 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=0 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000020 1 0.000041
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000785 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000249 1 0.001136
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000043 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.001661 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 2744320 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 87 handle_osd_map epochs [87,88], i have 88, src has [1,88]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.003131 2 0.001444
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.004853 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.005914 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000044 1 0.000070
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77856768 unmapped: 2875392 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 88 heartbeat osd_stat(store_statfs(0x4fceac000/0x0/0x4ffc00000, data 0xeca2d/0x17c000, compress 0x0/0x0/0x0, omap 0xc23b, meta 0x2bc3dc5), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 88 handle_osd_map epochs [88,89], i have 88, src has [1,89]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 89 pg[9.15( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.002741 6 0.000029
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 89 pg[9.15( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 89 pg[9.15( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 89 pg[9.15( v 43'551 lc 42'37 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001496 3 0.000122
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 89 pg[9.15( v 43'551 lc 42'37 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 89 pg[9.15( v 43'551 lc 42'37 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000029 1 0.000043
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 89 pg[9.15( v 43'551 lc 42'37 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 89 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.028640 1 0.000047
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 89 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77881344 unmapped: 2850816 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734458 data_alloc: 218103808 data_used: 5202
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.030361176s of 10.050309181s, submitted: 56
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.983411 1 0.000022
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 1.013683 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.016474 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000327 1 0.000404
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000085 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000041 1 0.000189
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: merge_log_dups log.dups.size()=0 olog.dups.size()=9
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000777 3 0.000077
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 2842624 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 91 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002275 2 0.000062
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 91 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.003179 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 91 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 91 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=90/91 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 91 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=90/91 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 91 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=90/91 n=6 ec=47/37 lis/c=90/54 les/c/f=91/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001043 3 0.000089
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 91 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=90/91 n=6 ec=47/37 lis/c=90/54 les/c/f=91/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 91 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=90/91 n=6 ec=47/37 lis/c=90/54 les/c/f=91/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 91 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=90/91 n=6 ec=47/37 lis/c=90/54 les/c/f=91/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 91 handle_osd_map epochs [91,91], i have 91, src has [1,91]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 78987264 unmapped: 1744896 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 78987264 unmapped: 1744896 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 91 heartbeat osd_stat(store_statfs(0x4fcea1000/0x0/0x4ffc00000, data 0xf3437/0x189000, compress 0x0/0x0/0x0, omap 0xcc6b, meta 0x2bc3395), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 78995456 unmapped: 1736704 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79003648 unmapped: 1728512 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 745041 data_alloc: 218103808 data_used: 5202
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.b scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.b scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 92 handle_osd_map epochs [93,94], i have 92, src has [1,94]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79011840 unmapped: 1720320 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79020032 unmapped: 1712128 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79069184 unmapped: 1662976 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 94 handle_osd_map epochs [95,96], i have 94, src has [1,96]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 96 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xfbc19/0x198000, compress 0x0/0x0/0x0, omap 0xd3ef, meta 0x2bc2c11), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766127 data_alloc: 218103808 data_used: 6657
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.027341843s of 10.044580460s, submitted: 44
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79134720 unmapped: 1597440 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79142912 unmapped: 2637824 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79142912 unmapped: 2637824 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.17 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.17 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79216640 unmapped: 2564096 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 100 heartbeat osd_stat(store_statfs(0x4fce86000/0x0/0x4ffc00000, data 0x102857/0x1a4000, compress 0x0/0x0/0x0, omap 0xde39, meta 0x2bc21c7), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 100 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79224832 unmapped: 2555904 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 782436 data_alloc: 218103808 data_used: 6657
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79224832 unmapped: 2555904 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79224832 unmapped: 2555904 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 101 heartbeat osd_stat(store_statfs(0x4fce81000/0x0/0x4ffc00000, data 0x1042a6/0x1a7000, compress 0x0/0x0/0x0, omap 0xe0a1, meta 0x2bc1f5f), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 2547712 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 2547712 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 2539520 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 782436 data_alloc: 218103808 data_used: 6657
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 101 heartbeat osd_stat(store_statfs(0x4fce81000/0x0/0x4ffc00000, data 0x1042a6/0x1a7000, compress 0x0/0x0/0x0, omap 0xe0a1, meta 0x2bc1f5f), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 101 handle_osd_map epochs [102,102], i have 102, src has [1,102]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 2523136 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.514952660s of 10.520758629s, submitted: 7
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79265792 unmapped: 2514944 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 2498560 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 103 handle_osd_map epochs [103,104], i have 104, src has [1,104]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 2744320 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 2744320 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 795945 data_alloc: 218103808 data_used: 6657
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 105 heartbeat osd_stat(store_statfs(0x4fce75000/0x0/0x4ffc00000, data 0x10b01c/0x1b3000, compress 0x0/0x0/0x0, omap 0xeafd, meta 0x2bc1503), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 105 heartbeat osd_stat(store_statfs(0x4fce75000/0x0/0x4ffc00000, data 0x10b01c/0x1b3000, compress 0x0/0x0/0x0, omap 0xeafd, meta 0x2bc1503), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 105 handle_osd_map epochs [106,106], i have 106, src has [1,106]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 2686976 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fce74000/0x0/0x4ffc00000, data 0x10cbd5/0x1b6000, compress 0x0/0x0/0x0, omap 0xed67, meta 0x2bc1299), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 2686976 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 2678784 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 106 handle_osd_map epochs [107,108], i have 106, src has [1,108]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 108 heartbeat osd_stat(store_statfs(0x4fce74000/0x0/0x4ffc00000, data 0x10cbd5/0x1b6000, compress 0x0/0x0/0x0, omap 0xed67, meta 0x2bc1299), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 108 handle_osd_map epochs [109,109], i have 109, src has [1,109]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 2670592 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 109 handle_osd_map epochs [109,110], i have 109, src has [1,110]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 78995456 unmapped: 2785280 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810967 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 78995456 unmapped: 2785280 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 78995456 unmapped: 2785280 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79003648 unmapped: 2777088 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.081542015s of 12.090872765s, submitted: 13
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f(unlocked)] enter Initial
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=0 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000065 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=0 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000024
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000495 1 0.000032
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000028 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000552 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 111 handle_osd_map epochs [111,111], i have 111, src has [1,111]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79069184 unmapped: 2711552 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 111 heartbeat osd_stat(store_statfs(0x4fce65000/0x0/0x4ffc00000, data 0x11510d/0x1c5000, compress 0x0/0x0/0x0, omap 0xf772, meta 0x2bc088e), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 111 handle_osd_map epochs [111,112], i have 111, src has [1,112]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 111 handle_osd_map epochs [112,112], i have 112, src has [1,112]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.005684 2 0.000071
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.006259 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.006277 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000071 1 0.000098
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 112 handle_osd_map epochs [112,112], i have 112, src has [1,112]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 2703360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 818978 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 113 pg[9.1f( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 crt=43'551 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.346244 5 0.000038
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 113 pg[9.1f( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 crt=43'551 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 113 pg[9.1f( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 crt=43'551 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 113 pg[9.1f( v 43'551 lc 42'133 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001015 4 0.000607
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 113 pg[9.1f( v 43'551 lc 42'133 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 113 pg[9.1f( v 43'551 lc 42'133 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000047 1 0.000047
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 113 pg[9.1f( v 43'551 lc 42'133 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.039869 1 0.000025
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fce62000/0x0/0x4ffc00000, data 0x116b8e/0x1c8000, compress 0x0/0x0/0x0, omap 0xfa41, meta 0x2bc05bf), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 2703360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.624326 1 0.000022
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.665343 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.011754 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000039 1 0.000063
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000021 1 0.000025
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: merge_log_dups log.dups.size()=0 olog.dups.size()=11
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001107 3 0.000026
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 2695168 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fce57000/0x0/0x4ffc00000, data 0x11a0c8/0x1cf000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 114 handle_osd_map epochs [115,115], i have 114, src has [1,115]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 114 handle_osd_map epochs [114,115], i have 115, src has [1,115]
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999701 2 0.000060
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000870 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=114/115 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=114/115 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=114/115 n=6 ec=47/37 lis/c=114/67 les/c/f=115/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001260 4 0.000364
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=114/115 n=6 ec=47/37 lis/c=114/67 les/c/f=115/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=114/115 n=6 ec=47/37 lis/c=114/67 les/c/f=115/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=114/115 n=6 ec=47/37 lis/c=114/67 les/c/f=115/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 2678784 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 2670592 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 2670592 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 840035 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 2670592 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79118336 unmapped: 2662400 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79118336 unmapped: 2662400 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79126528 unmapped: 2654208 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79126528 unmapped: 2654208 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 840035 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.232923508s of 12.249648094s, submitted: 28
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79126528 unmapped: 2654208 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79134720 unmapped: 2646016 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79134720 unmapped: 2646016 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.c scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79142912 unmapped: 2637824 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79142912 unmapped: 2637824 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848965 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79142912 unmapped: 2637824 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79151104 unmapped: 2629632 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79151104 unmapped: 2629632 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79151104 unmapped: 2629632 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79159296 unmapped: 2621440 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848965 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79167488 unmapped: 2613248 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79175680 unmapped: 2605056 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79175680 unmapped: 2605056 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.060271263s of 13.066132545s, submitted: 8
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79183872 unmapped: 2596864 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79183872 unmapped: 2596864 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 851378 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79192064 unmapped: 2588672 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79192064 unmapped: 2588672 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79192064 unmapped: 2588672 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79208448 unmapped: 2572288 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79224832 unmapped: 2555904 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 856204 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.e scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.e scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 2547712 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 2547712 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.a scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.a scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 2539520 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 2539520 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.775984764s of 10.782183647s, submitted: 10
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 2523136 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 863437 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 2523136 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 2523136 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79265792 unmapped: 2514944 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79265792 unmapped: 2514944 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 2506752 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 868261 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 2498560 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 2482176 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 2465792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 2465792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 2457600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 875494 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 2457600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79331328 unmapped: 2449408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79331328 unmapped: 2449408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79331328 unmapped: 2449408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 2441216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 875494 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 2441216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 2433024 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.874313354s of 17.881334305s, submitted: 12
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 2433024 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.f scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 2433024 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.c scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.c scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 2408448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 882727 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 2408448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79380480 unmapped: 2400256 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 2392064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79396864 unmapped: 2383872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79413248 unmapped: 2367488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 894790 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79413248 unmapped: 2367488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 2359296 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 2359296 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.940398216s of 10.950827599s, submitted: 16
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 2334720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 2326528 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899618 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 2310144 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 2310144 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 2310144 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79478784 unmapped: 2301952 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79478784 unmapped: 2301952 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904446 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79478784 unmapped: 2301952 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 2293760 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 2293760 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79495168 unmapped: 2285568 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.080490112s of 11.087811470s, submitted: 10
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79511552 unmapped: 2269184 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909270 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 2260992 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 2260992 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.a scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.a scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79527936 unmapped: 2252800 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79527936 unmapped: 2252800 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79536128 unmapped: 2244608 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916503 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79536128 unmapped: 2244608 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79536128 unmapped: 2244608 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.c scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.c scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79544320 unmapped: 2236416 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79544320 unmapped: 2236416 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 2220032 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918916 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 2220032 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.a scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.815450668s of 12.822014809s, submitted: 10
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.a scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 2211840 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 2203648 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 2195456 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 2187264 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923740 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 2187264 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79601664 unmapped: 2179072 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79601664 unmapped: 2179072 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 2170880 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79618048 unmapped: 2162688 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928564 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79618048 unmapped: 2162688 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 2154496 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.193034172s of 10.200374603s, submitted: 10
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 2154496 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79634432 unmapped: 2146304 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79634432 unmapped: 2146304 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935801 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79634432 unmapped: 2146304 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 2138112 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 2138112 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 2129920 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 2129920 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938216 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 2113536 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 2088960 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 2088960 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 2080768 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 2072576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940629 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.985782623s of 12.992125511s, submitted: 8
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 2064384 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 2064384 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 2048000 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 2048000 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 2048000 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943042 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 2039808 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 2039808 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 2031616 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 2031616 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 2031616 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947872 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 2023424 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.039779663s of 11.043544769s, submitted: 6
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 2023424 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79765504 unmapped: 2015232 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79765504 unmapped: 2015232 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79765504 unmapped: 2015232 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950287 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 2007040 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 2007040 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 1990656 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 1990656 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 1990656 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955117 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 1966080 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 1966080 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.985705376s of 10.989761353s, submitted: 6
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 1957888 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 1957888 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 1957888 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957530 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 1949696 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 1949696 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 1941504 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 1925120 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 1925120 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959943 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.f scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.f scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 1916928 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 1916928 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 1908736 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.b scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.905800819s of 11.910030365s, submitted: 6
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.b scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 1908736 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 1892352 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967184 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 1892352 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79896576 unmapped: 1884160 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 1875968 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 1875968 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 1867776 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969599 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 1867776 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79921152 unmapped: 1859584 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79921152 unmapped: 1859584 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79921152 unmapped: 1859584 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79929344 unmapped: 1851392 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972012 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.826974869s of 11.832626343s, submitted: 8
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79945728 unmapped: 1835008 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79945728 unmapped: 1835008 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 1818624 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 1810432 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 1802240 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976836 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 1802240 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 1802240 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 1777664 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 1777664 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.a scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.a scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 1761280 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981658 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.098447800s of 10.105239868s, submitted: 8
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 1761280 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 1761280 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 1753088 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 1753088 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1744896 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986482 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1744896 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 1712128 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 1703936 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 1703936 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 1695744 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 1695744 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 1687552 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 1687552 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 1687552 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 1679360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 1679360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 1679360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 1671168 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 1662976 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 1654784 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 1654784 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80134144 unmapped: 1646592 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80134144 unmapped: 1646592 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80134144 unmapped: 1646592 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1638400 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1638400 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 1630208 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1622016 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1622016 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1613824 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1613824 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1605632 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1605632 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1597440 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1597440 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1597440 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1589248 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Dec 13 02:36:29 np0005558317 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2826344808' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1589248 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 1581056 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1572864 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1572864 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1564672 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1564672 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80224256 unmapped: 1556480 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80224256 unmapped: 1556480 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1548288 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1548288 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1548288 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1540096 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1540096 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1540096 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 1531904 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 1523712 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1515520 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1515520 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1515520 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80273408 unmapped: 1507328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80273408 unmapped: 1507328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 1499136 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 1499136 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 1499136 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1490944 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1490944 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 1474560 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 1474560 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 1474560 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1466368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1466368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1458176 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1458176 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1458176 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 1449984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 1449984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 1449984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1441792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1441792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 1433600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 1433600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 1433600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1425408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1425408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 1417216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 1417216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1400832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 1392640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 1392640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1384448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1384448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1384448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 1376256 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1368064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1368064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1368064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1368064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 1359872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 1359872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1351680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1351680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1351680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 1343488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 1343488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1335296 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1335296 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 1327104 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 1327104 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 1327104 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1318912 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1318912 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1310720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 1302528 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 1302528 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1294336 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1294336 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1294336 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 1286144 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 1286144 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1277952 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1269760 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1368064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1368064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1368064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 1359872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 1359872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1351680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1351680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1351680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 1343488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 1343488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1335296 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 1327104 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 1327104 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1318912 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1318912 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1310720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1310720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 1302528 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 1302528 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 1302528 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1294336 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1294336 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1294336 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 1286144 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 1286144 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1277952 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1277952 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1269760 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1269760 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80519168 unmapped: 1261568 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 1253376 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 1253376 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 1253376 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 1245184 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 1245184 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 1236992 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 1228800 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 1228800 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 1220608 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 1220608 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 1212416 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 1212416 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 1212416 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 1204224 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 1204224 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 1196032 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 1196032 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 1187840 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 1187840 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 1187840 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 1179648 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 1179648 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80609280 unmapped: 1171456 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80609280 unmapped: 1171456 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80609280 unmapped: 1171456 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80625664 unmapped: 1155072 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80625664 unmapped: 1155072 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80633856 unmapped: 1146880 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80633856 unmapped: 1146880 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80642048 unmapped: 1138688 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80642048 unmapped: 1138688 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80642048 unmapped: 1138688 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80650240 unmapped: 1130496 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80650240 unmapped: 1130496 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80650240 unmapped: 1130496 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80658432 unmapped: 1122304 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80658432 unmapped: 1122304 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80666624 unmapped: 1114112 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80666624 unmapped: 1114112 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80674816 unmapped: 1105920 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80674816 unmapped: 1105920 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80674816 unmapped: 1105920 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 1097728 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 1097728 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 1097728 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80691200 unmapped: 1089536 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80691200 unmapped: 1089536 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80699392 unmapped: 1081344 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80699392 unmapped: 1081344 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80707584 unmapped: 1073152 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80715776 unmapped: 1064960 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80715776 unmapped: 1064960 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 1056768 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 1056768 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1048576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1048576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1048576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1040384 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1040384 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 1032192 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 1032192 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 1032192 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 1024000 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 1024000 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 1015808 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 1015808 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 1015808 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 1007616 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 1007616 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 1007616 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 991232 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 991232 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 983040 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 983040 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 983040 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 974848 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 974848 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 966656 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 966656 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 958464 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 958464 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 958464 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 950272 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 950272 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 942080 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 942080 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 942080 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 942080 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 933888 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 933888 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 925696 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 925696 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 925696 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80863232 unmapped: 917504 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80863232 unmapped: 917504 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 6995 writes, 28K keys, 6995 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 6995 writes, 1406 syncs, 4.98 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 6995 writes, 28K keys, 6995 commit groups, 1.0 writes per commit group, ingest: 19.58 MB, 0.03 MB/s
Interval WAL: 6995 writes, 1406 syncs, 4.98 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 802816 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 802816 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 794624 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 794624 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 794624 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 786432 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 786432 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 778240 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 778240 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 778240 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 770048 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 770048 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 770048 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81018880 unmapped: 761856 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81018880 unmapped: 761856 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 753664 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 753664 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 753664 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81035264 unmapped: 745472 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81035264 unmapped: 745472 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 737280 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81051648 unmapped: 729088 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81051648 unmapped: 729088 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 720896 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 720896 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 712704 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 712704 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 712704 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81076224 unmapped: 704512 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81076224 unmapped: 704512 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 696320 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 696320 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 696320 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 688128 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 688128 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 679936 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 679936 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 679936 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 679936 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81108992 unmapped: 671744 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81108992 unmapped: 671744 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81108992 unmapped: 671744 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81108992 unmapped: 671744 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 663552 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 663552 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 663552 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 655360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 655360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 647168 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 647168 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 293.603179932s of 293.610778809s, submitted: 10
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 606208 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 655360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 655360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 655360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 655360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 655360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 655360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 655360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 655360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 647168 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 647168 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 638976 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 638976 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 630784 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 630784 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 622592 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 622592 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 622592 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 614400 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 614400 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 606208 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 606208 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 598016 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 598016 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 598016 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 589824 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 589824 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 581632 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 581632 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 573440 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 573440 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 573440 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 565248 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 565248 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 565248 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 557056 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 557056 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 548864 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 548864 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 548864 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 540672 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 540672 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 532480 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 532480 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 532480 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 524288 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 524288 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81264640 unmapped: 516096 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81264640 unmapped: 516096 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81264640 unmapped: 516096 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 507904 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 507904 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81281024 unmapped: 499712 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81281024 unmapped: 499712 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81281024 unmapped: 499712 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 491520 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 491520 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 475136 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 475136 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 475136 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 466944 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 466944 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 458752 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 458752 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 458752 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 450560 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 450560 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 434176 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 434176 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 434176 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 434176 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 393216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 393216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 393216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 393216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 393216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 393216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 393216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 393216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 393216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 393216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 393216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 385024 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 385024 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 385024 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 385024 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 ms_handle_reset con 0x560fe06e3c00 session 0x560fe01b0c40
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 ms_handle_reset con 0x560fe06e3400 session 0x560fe016dc00
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 300.071716309s of 300.105926514s, submitted: 90
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 352256 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 352256 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 352256 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 352256 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 352256 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 7219 writes, 28K keys, 7219 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 7219 writes, 1518 syncs, 4.76 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 286720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 286720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 286720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 286720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 286720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 286720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 286720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 286720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 286720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 286720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 286720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 286720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 278528 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 278528 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 278528 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 278528 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 278528 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 278528 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 278528 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 299.917694092s of 299.925079346s, submitted: 22
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 02:36:29 np0005558317 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
